Category: Hosting

  • Prison Fellowship’s Migration to Cloudflare: Choosing Between Pro and Business Plans

    Prison Fellowship’s Migration to Cloudflare: Choosing Between Pro and Business Plans

    As a long-time advocate of Cloudflare, I have relied on their services for my personal DNS hosting for several years. Their robust features and user-friendly interface have made my experience seamless and efficient. Recently, I took on the task of migrating all of Prison Fellowship’s DNS to Cloudflare, which presented a new challenge: deciding between their Pro and Business level plans. After extensive research and several enlightening sessions with ChatGPT, I concluded that the Pro plan would adequately meet our needs for the time being.

    If you find yourself grappling with a similar decision, I’d like to share my thought process regarding the Business plan and why I ultimately leaned towards the Pro plan.

    1. Support Availability

    One of the most significant advantages of the Business plan is the 24/7/365 support. This feature is particularly beneficial for organizations that operate e-commerce sites or have critical applications that require constant uptime. With round-the-clock support, any issues can be addressed immediately, minimizing potential downtime and ensuring a smooth user experience. However, for Prison Fellowship, our current operations do not necessitate this level of support.

    2. Web Application Firewall (WAF) Rules

    The Business plan includes a more extensive Web Application Firewall (WAF) with 50 rules, compared to the 20 rules offered in the Pro plan. This is a crucial consideration for organizations that need robust security measures to protect against various online threats, such as SQL injection and cross-site scripting. Since we were migrating from the Free plan, I wanted to assess whether the 20 WAF rules would adequately meet our security needs before committing to the more expensive Business plan. This approach allows us to evaluate our security posture without incurring unnecessary costs upfront.

    3. Custom SSL Certificates

    Another feature exclusive to the Business plan is the ability to use custom SSL certificates. While this option is valuable for many organizations that require specific branding or compliance needs, it was not a necessity for us at this time. The Universal SSL provided by Cloudflare offers robust security for our website, and we are currently satisfied with this level of protection. As our organization grows and our needs evolve, we can always revisit the option of custom SSL certificates if required.

    4. Detailed Analytics

    The Business plan also provides access to more detailed analytics, which can be beneficial for organizations looking to gain deeper insights into their web traffic and performance metrics. While I was intrigued by the prospect of having access to more comprehensive data, I ultimately decided that the analytics offered in the Pro plan would be sufficient for our initial needs. The Pro plan still provides valuable insights that can help us monitor our website’s performance and make informed decisions without overwhelming us with data.

    5. Advanced DDoS Protection

    One of the most compelling features of the Business plan is its advanced DDoS protection. This is particularly crucial for organizations that may be at risk of targeted attacks, as it helps safeguard against service disruptions. While this was a significant selling point for the Business plan, I wanted to first evaluate how the Pro plan performed in terms of security and traffic management. By starting with the Pro plan, we can monitor our website’s performance and security measures before deciding if we need the enhanced DDoS protection offered by the Business plan.

    Wrap-Up

    In conclusion, after carefully weighing the features and benefits of both the Pro and Business plans, I determined that the Pro plan would serve our needs at Prison Fellowship for the time being. It offers a solid foundation of security and performance features without the higher cost associated with the Business plan.

    • Pro Plan: Ideal for small to medium-sized businesses that require enhanced security and performance features without the higher price tag. It provides essential tools to manage DNS effectively while keeping costs manageable.
    • Business Plan: Tailored for larger organizations or those with more complex needs, offering advanced features, better support, and higher limits. This plan is perfect for businesses that require constant uptime and extensive security measures.

    As we grow and our requirements evolve, I remain open to revisiting the Business plan to take advantage of its additional capabilities. If you’re facing a similar decision, I hope my thought process helps guide you in choosing the right Cloudflare plan for your organization.

  • Traefik Adventures

    Traefik Adventures

    At work I was looking into ways to decrease our AWS public IP usage. We, along with the rest of the world, were hit with a monthly cost for using too many IP addresses. It was not a total surprise, since AWS had announced this was coming, but the price tag was a bit of a shock as I hadn’t realized how many public IPs we were using.

    So I started thinking through the problem: what if we routed our traffic through a single load balancer, which then hit some sort of internal load balancer that routed traffic to our various apps and whatnot? After a little bit of searching I decided to check out Traefik, as it seems to have the features I think I’ll need.

    I had never used Traefik before, so I decided to try it in my home lab, swapping out Nginx Proxy Manager.

    Full disclaimer: NPM works well and is simple to use; I’m not bashing it here. I did have some minor issues that annoyed me, like trying to store its config in Git. I’m sure there are ways to do it; I tried Terraform, but it never worked the way I thought it should. But I wanted to try out Traefik before talking about it at work, so here we are.

    Traefik Overview

    So let me pause for a moment to talk about how Traefik works. There are lots of posts out there on this topic, but none of them did a great job of describing Traefik’s architecture.

    Static Config

    The static config (traefik.yaml or traefik.toml) describes global settings like logging and whether the dashboard and API are enabled. Ingress is set up here: if you want ports 443 or 80 open, you define them here and give them a name like http or web. This is also where you set up Let’s Encrypt; in my case I wanted to do DNS verification, so I have those settings configured for Cloudflare.

    log:
      level: WARN
      filePath: "/etc/traefik/log/traefik.log"
    accessLog:
      filePath: "/etc/traefik/log/access.log"
    api:
      dashboard: true                             # Enable the dashboard
      #insecure: true
    
    # Certificate Resolvers are responsible for retrieving certificates from an ACME server
    # See https://doc.traefik.io/traefik/https/acme/#certificate-resolvers
    certificatesResolvers:
      letsencrypt:
        acme:
    #      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
          email: "[email protected]"  # Email address used for registration
          storage: "/etc/traefik/acme/acme.json"    # File or key used for certificates storage
          #tlsChallenge: {}
          dnsChallenge:
            provider: cloudflare
    
    entryPoints:
      http:
        address: ":80"                            # Create the HTTP entrypoint on port 80
        http:
          redirections:                           # HTTPS redirection (80 to 443)
            entryPoint:
              to: "https"                         # The target entrypoint
              scheme: "https"                     # The redirection target scheme
      https:
        address: ":443"                           # Create the HTTPS entrypoint on port 443
    
    global:
      checkNewVersion: true                       # Periodically check if a new version has been released
      sendAnonymousUsage: true                    # Periodically send anonymous usage statistics
    
    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"   # Listen to the UNIX Docker socket
        exposedByDefault: false                   # Only expose containers that are explicitly enabled (using the label traefik.enable=true)
        # network: "traefik-net"                    # Default network to use for connections to all containers
        # swarmMode: true                           # Activates Swarm Mode (instead of standalone Docker)
        # swarmModeRefreshSeconds: 15               # Defines the polling interval (in seconds) in Swarm Mode
        # watch: true                               # Watch Docker Swarm events
      file:
        directory: "/etc/traefik/config"          # Link to the dynamic configuration
        watch: true                               # Watch for modifications
      providersThrottleDuration: 10               # Configuration reload frequency
    
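    Since exposedByDefault is false, a container only gets routed once you opt it in with labels. As a sketch of what that looks like (the whoami service and hostname here are placeholders of mine, not something from my actual setup), a docker-compose service might be labeled like this:

    ```yaml
    # Hypothetical docker-compose.yaml snippet; service name and hostname are examples
    services:
      whoami:
        image: traefik/whoami
        labels:
          - "traefik.enable=true"                                         # opt in, since exposedByDefault is false
          - "traefik.http.routers.whoami.rule=Host(`whoami.mydomain.com`)"
          - "traefik.http.routers.whoami.entrypoints=https"
          - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
    ```

    With the docker provider watching the socket, Traefik picks this up, builds the router, and grabs a certificate without any restart.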

    Dynamic Config

    The dynamic config (config/file.yaml) is what comes next, and it is dynamic in nature: add a Docker container and Traefik adds the routes and grabs a certificate. In my case I was manually configuring services in a file, because not everything I’m running is in Docker on the same host (looking at you, mailcow!). This gave me a lot of flexibility to route things exactly the way I wanted, and my config is stored in GitHub!

    http:
      # Add the router
      routers:
        dns1:
          entryPoints:
            - https
          tls:
            certResolver: letsencrypt
            options: "modern@file"
          service: dns1
          rule: "Host(`hostname.mydomain.com`)"
          middlewares:
            - "default@file"
      # Add the service
      services:
        dns1:
          loadBalancer:
            serversTransport: nossl
            servers:
              - url: https://internal.ip
      serversTransports:
        nossl:
          # Required if using self-signed certs internally
          insecureSkipVerify: true
    

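    The router references default@file and modern@file, which have to be defined elsewhere in the dynamic config. I won’t paste my exact definitions, but a minimal sketch might look something like this (the specific header and TLS settings here are illustrative, not my actual values):

    ```yaml
    # Sketch of the referenced middleware and TLS options; adjust to taste
    http:
      middlewares:
        default:
          headers:
            stsSeconds: 31536000          # Send HSTS for a year
            browserXssFilter: true
    tls:
      options:
        modern:
          minVersion: "VersionTLS13"      # Modern clients only
    ```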
    Once I got that lined up it was easy to then expand on this and move my other hosts behind Traefik.

    Conclusion

    I’ve only been running Traefik for a couple of days now, but I’m impressed with what it can do out of the box. I like that it requires file config rather than a GUI, which forces me to put things in version control. It’s really fast too, which is what I expected since it’s a single Go binary. I installed it directly instead of running it in Docker; since it’s a single binary, Docker felt like a lot of overhead. Will I be keeping Traefik? At this point, yes, I think I like it better. It’s a steeper learning curve to get started, but now that I kind of get it, I think it’s going to be a more powerful tool. Tomorrow I’ll be looking into what it would take to set up Traefik as an ingress controller for our Kubernetes cluster at work. I think using it could allow us to reduce our need for AWS ALBs and public IPs by having a single load balancer direct all traffic to Traefik.
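    I haven’t actually tried the Kubernetes side yet, but for reference, routing there would use Traefik’s IngressRoute custom resource instead of labels or files. A minimal sketch (app name, hostname, and backing service are placeholders):

    ```yaml
    # Hypothetical IngressRoute; names and hostname are examples only
    apiVersion: traefik.io/v1alpha1
    kind: IngressRoute
    metadata:
      name: myapp
    spec:
      entryPoints:
        - websecure
      routes:
        - match: Host(`myapp.mydomain.com`)
          kind: Rule
          services:
            - name: myapp-svc   # the ClusterIP service backing the app
              port: 80
    ```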

  • One Year of Mailcow

    One Year of Mailcow

    I’ve been hosting my personal domain’s email on Mailcow for over a year now, after Google Apps started charging for their service, and I have to say it works pretty well. I had a good architecture to start but needed to iterate on the design of the infrastructure. A few things changed: I swapped out EFS for a second EBS data volume that is dynamically attached at EC2 boot time, I moved my S3 backups into Glacier to reduce costs, and I did end up needing to upgrade my EC2 to a larger instance (I still need to revisit the metrics to determine whether that was really necessary). But you know how it is when you break something and the family is using it… you hear about it.

    AWS Hosting

    I do host this on AWS; my reasoning was just keeping my skills sharp. I had originally spun up the stack with CloudFormation to test out some of its latest changes, but I have since converted those scripts to Terraform. Terraform is so simple… there’s no comparison. This is not the cheapest solution; I could host this anywhere, or at home, but I chose to put it here to continue honing my AWS skills. Also, let’s be honest, AWS is a really solid host.

    Mailcow Pros:

    • It’s stupid simple to update; they have a script that will pull the latest changes from Git, pull Docker images, restart services, and then clean up after itself.
    • It just works. I’ve had no real problems other than ones I’ve created. If you leave it alone, it just runs.
    • Backup and restore works. I’ve only done full backups and restores, so I can’t comment on restoring individual messages, but I can spin up an empty EC2 instance and bring my server back up quickly with a restore from S3.

    Mailcow Cons:

    • It’s a bit bloated; some included components may not really be needed. For instance, I like ActiveSync for my mobile device, but honestly I could probably just use IMAP IDLE.
    • SOGo is ugly, and we did have some issues with the calendar. It’d be nice if there were a better solution. I know there is an option to use Nextcloud, but I haven’t played with that yet.
    • Documentation could use some work; there were places where I had to do extra research and guessing while building out my solution.

    Conclusion

    If you want to host your own email, Mailcow just works. There are other less resource-intensive solutions out there with good reviews too; I suggest trying them out and picking what works for you. With hindsight being 20/20, would I self-host email again? I think so. I’ve learned a lot about email, specifically DKIM and SPF records (I’ll do a whole post about those), so it’s been a good growing experience. I haven’t lost any email (knock on wood), so that’s good. And honestly, the server does just work.

  • Replacing Google Workspaces

    Replacing Google Workspaces

    So with the announcement that my freeloading on Google for email hosting is coming to an end, I decided to go down the road of setting up my own email server; since I was going to have to pay for email going forward, why not just host it? Is this a good idea? I’m not sure. There are lots of blog posts that tell you not to host your own email, but in my situation I had several custom domains feeding into Google with around 20 freeloading family users, and cost became an issue.

    Most of my cloud expertise is in AWS, which made it a pretty easy decision to use their services to host my server. Since Mailcow is now Dockerized, it was fairly easy to create an ASG with a single EC2 server. I host the data on EFS, which allows me to kill off the EC2 instance and have it rebuild itself within a few minutes. And to top it off, I am using SES for outbound email, as this helps keep my sent messages out of spam filters.

    So far there have been very few gotchas; the issues that have come up were my own doing for overthinking the problem. Once the family starts using the server, though, I’m sure I’ll need to iterate. My only real fear is losing data, and I’ve been able to test for that a few times by completely tearing down the stack, rebuilding, and reloading from a backup.

    The next step is importing the family’s email into the server (Mailcow uses imapsync under the hood) and actually getting some real traffic beyond just me and my emails.