Author: Kevin

  • Prison Fellowship’s Migration to Cloudflare: Choosing Between Pro and Business Plans

    Prison Fellowship’s Migration to Cloudflare: Choosing Between Pro and Business Plans

    As a long-time advocate of Cloudflare, I have relied on their services for my personal DNS hosting for several years. Their robust features and user-friendly interface have made my experience seamless and efficient. Recently, I took on the task of migrating all of Prison Fellowship’s DNS to Cloudflare, which presented a new challenge: deciding between their Pro and Business level plans. After extensive research and several enlightening sessions with ChatGPT, I concluded that the Pro plan would adequately meet our needs for the time being.

    If you find yourself grappling with a similar decision, I’d like to share my thought process regarding the Business plan and why I ultimately leaned towards the Pro plan.

    1. Support Availability

    One of the most significant advantages of the Business plan is the 24/7/365 support. This feature is particularly beneficial for organizations that operate e-commerce sites or have critical applications that require constant uptime. With round-the-clock support, any issues can be addressed immediately, minimizing potential downtime and ensuring a smooth user experience. However, for Prison Fellowship, our current operations do not necessitate this level of support.

    2. Web Application Firewall (WAF) Rules

    The Business plan includes a more extensive Web Application Firewall (WAF) with 50 rules, compared to the 20 rules offered in the Pro plan. This is a crucial consideration for organizations that need robust security measures to protect against various online threats, such as SQL injection and cross-site scripting. Since we were migrating from the Free plan, I wanted to assess whether the 20 WAF rules would adequately meet our security needs before committing to the more expensive Business plan. This approach allows us to evaluate our security posture without incurring unnecessary costs upfront.
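
    For context on what counts against that limit: each custom WAF rule is essentially a filter expression plus an action, and a hypothetical rule like the one below would consume one of the Pro plan’s 20 slots (the path and IP range here are made up for illustration, not our actual configuration):

    Expression: (http.request.uri.path contains "/wp-login.php" and not ip.src in {203.0.113.0/24})
    Action:     Block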

    3. Custom SSL Certificates

    Another feature exclusive to the Business plan is the ability to use custom SSL certificates. While this option is valuable for many organizations that require specific branding or compliance needs, it was not a necessity for us at this time. The Universal SSL provided by Cloudflare offers robust security for our website, and we are currently satisfied with this level of protection. As our organization grows and our needs evolve, we can always revisit the option of custom SSL certificates if required.

    4. Detailed Analytics

    The Business plan also provides access to more detailed analytics, which can be beneficial for organizations looking to gain deeper insights into their web traffic and performance metrics. While I was intrigued by the prospect of having access to more comprehensive data, I ultimately decided that the analytics offered in the Pro plan would be sufficient for our initial needs. The Pro plan still provides valuable insights that can help us monitor our website’s performance and make informed decisions without overwhelming us with data.

    5. Advanced DDoS Protection

    One of the most compelling features of the Business plan is its advanced DDoS protection. This is particularly crucial for organizations that may be at risk of targeted attacks, as it helps safeguard against service disruptions. While this was a significant selling point for the Business plan, I wanted to first evaluate how the Pro plan performed in terms of security and traffic management. By starting with the Pro plan, we can monitor our website’s performance and security measures before deciding if we need the enhanced DDoS protection offered by the Business plan.

    Wrap-Up

    In conclusion, after carefully weighing the features and benefits of both the Pro and Business plans, I determined that the Pro plan would serve our needs at Prison Fellowship for the time being. It offers a solid foundation of security and performance features without the higher cost associated with the Business plan.

    • Pro Plan: Ideal for small to medium-sized businesses that require enhanced security and performance features without the higher price tag. It provides essential tools to manage DNS effectively while keeping costs manageable.
    • Business Plan: Tailored for larger organizations or those with more complex needs, offering advanced features, better support, and higher limits. This plan is perfect for businesses that require constant uptime and extensive security measures.

    As we grow and our requirements evolve, I remain open to revisiting the Business plan to take advantage of its additional capabilities. If you’re facing a similar decision, I hope my thought process helps guide you in choosing the right Cloudflare plan for your organization.

  • Heel Drop in Running Shoes

    Heel Drop in Running Shoes

    One of the things I didn’t know about when I began running and shopping for shoes is a small but crucial factor: heel drop—the difference in height between the heel and forefoot. While traditional running shoes often have a higher heel drop (10-12mm), lower-drop shoes (0-6mm) are gaining popularity among runners. But is less heel drop better? Let’s break it down.

    What Is Heel Drop?

    Heel drop, or heel-to-toe offset, measures how much the heel sits higher than the forefoot in a running shoe.

    • High-drop shoes (10-12mm): Promote heel striking and more cushioning in the heel.
    • Mid-drop shoes (5-8mm): Offer a balance between heel and forefoot cushioning.
    • Low-drop shoes (0-4mm): Encourage a more natural foot strike.
    • Zero-drop shoes (0mm): Keep the heel and forefoot level, mimicking barefoot movement.

    Why Less Heel Drop Is Better for Runners

    1. Encourages a Natural Running Gait

    A lower heel drop promotes midfoot or forefoot striking, reducing impact forces on the knees and encouraging a more efficient stride. This is closer to how humans naturally run barefoot, minimizing excess stress on joints.

    2. Reduces Risk of Knee Injuries

    Studies suggest high-heel-drop shoes increase impact on the knees, which can contribute to conditions like runner’s knee and IT band syndrome. Lower-drop shoes shift some of the load to the calves and ankles, distributing stress more evenly. Higher-drop shoes were linked to increased patellofemoral stress, a key factor in knee pain among runners (Bonacci et al., 2013).

    3. Strengthens Feet and Lower Legs

    Low-drop shoes engage foot muscles, Achilles tendons, and calves more actively, helping to build strength over time. This can improve overall running efficiency and reduce reliance on thick, cushioned footwear.

    4. Better Ground Feel and Stability

    A lower heel drop improves proprioception, or the ability to sense and adjust to terrain changes. This can lead to better balance, more responsive running, and a lower risk of ankle rolls—especially for trail runners.

    5. Enhances Running Efficiency

    By promoting a natural stride and engaging key muscles, low-drop shoes may improve running economy, reducing wasted energy and allowing for a more fluid, powerful movement.

    Best Low and Zero-Drop Running Shoe Brands

    Altra – The Leader in Zero-Drop Shoes

    Altra is known for its fully zero-drop design, allowing for the most natural foot positioning possible. Their shoes also feature a wide toe box, encouraging natural toe splay and comfort.

    Top Picks:

    • Altra Escalante – Great for road runners who want a soft yet responsive ride.
    • Altra Lone Peak – A favorite among trail runners, offering durability and grip.
    • Altra Torin – A cushioned zero-drop option for long-distance runners.

    Hoka – Low-Drop with Max Cushioning

    Hoka offers low-drop shoes (4-5mm) with high cushioning, making them a great option for those transitioning from higher-drop shoes.

    Top Picks:

    • Hoka Clifton – A soft, lightweight daily trainer with a 5mm drop.
    • Hoka Speedgoat – Ideal for trail runners who want grip and cushion.
    • Hoka Mach – A responsive, fast shoe with a low 5mm drop.

    Transitioning to a Lower Heel Drop Safely

    Switching from a high-drop to a low-drop shoe requires patience to avoid injury. Here’s how to transition safely:

    • Start gradually – Begin with short runs in lower-drop shoes to allow muscles and tendons to adapt.
    • Strengthen your calves – Since lower-drop shoes put more stress on the Achilles and calves, add calf raises and mobility exercises to your routine.
    • Listen to your body – Soreness is normal, but sharp pain is a red flag. Adjust your mileage as needed.
    • Rotate shoes – Use a mix of different heel drops to prevent overuse injuries.

    Should You Go Zero Drop?

    While zero-drop shoes can offer the most natural running experience, they’re not for everyone. Runners with stiff ankles, past Achilles injuries, or a history of calf issues may prefer a slight drop (4-6mm) for added comfort.

    Final Thoughts

    Lower heel-drop running shoes promote natural movement, reduce knee stress, strengthen muscles, and improve efficiency. I personally have chosen to run in Altra Lone Peak on the trails and Hoka Cliftons on the road. Whether you go zero-drop with Altra or prefer the low-drop cushioning of Hoka, choosing the right shoe can help you run stronger, stay injury-free, and feel more connected to the ground.

    📖 References:

    • Bonacci, J., Saunders, P. U., Hicks, A., Rantalainen, T., Vicenzino, B., & Spratford, W. (2013). Running in a minimalist shoe increases plantar pressure without modifying running biomechanics. Medicine & Science in Sports & Exercise, 45(7), 1342–1350.
    • Gruber, A. H., Silvernail, J. F., Brammer, J. D., & Derrick, T. R. (2017). Running economy and mechanics in runners with lower- and higher-arched feet. Sports Biomechanics, 16(3), 367–380.
    • Miller, E. E., Whitcome, K. K., Lieberman, D. E., Norton, H. L., & Dyer, R. E. (2014). The effect of minimal shoes on arch structure and intrinsic foot muscle strength. The American Journal of Sports Medicine, 42(6), 1354–1363.
    • Paquette, M. R., Zhang, S., Baumgartner, L. D., & Coe, D. P. (2013). Ground reaction forces and lower extremity biomechanics with different speeds in traditional versus minimalist running shoes. Journal of Biomechanics, 46(7), 1275–1282.
    • Willy, R. W., & Davis, I. S. (2014). Kinematic and kinetic comparison of running in standard and minimalist shoes. Journal of Sports Sciences, 32(13), 1277–1285.
  • AWS Aurora vs. Redshift for Data Warehousing

    AWS Aurora vs. Redshift for Data Warehousing

    At work we are looking into moving from a data dumping ground to a real data warehouse solution, which sent me down a rabbit hole: what should we use to host this ever-expanding database? Since we host in AWS, two commonly considered services for analytical workloads are Amazon Aurora and Amazon Redshift. While both are powerful, they serve different purposes and are optimized for different types of workloads. To sort out which way to go, here’s a brief overview of the two that helped me work through the decision:

    Understanding Aurora and Redshift

    Amazon Aurora

    Amazon Aurora is a relational database engine offered through Amazon RDS that provides high performance and availability. It is compatible with both MySQL and PostgreSQL, offering managed features such as automated backups, scaling, and replication.

    Amazon Redshift

    Amazon Redshift is a fully managed data warehouse designed for fast querying and analytical processing over large datasets. It is optimized for Online Analytical Processing (OLAP) workloads and integrates deeply with AWS analytics services like AWS Glue and Amazon QuickSight.

    Key Differences

    | Feature | Amazon Aurora | Amazon Redshift |
    | --- | --- | --- |
    | Type | Relational database (OLTP) | Data warehouse (OLAP) |
    | Workload | Transactional & mixed workloads | Analytical & reporting |
    | Data structure | Row-based | Columnar |
    | Query performance | Optimized for small queries with high concurrency | Optimized for complex queries over large datasets |
    | Scalability | Horizontal read replicas, limited vertical scaling | Massively parallel processing (MPP) for high scalability |
    | Storage model | Replicated storage across multiple AZs | Distributed columnar storage |
    | Best for | Applications needing high-performance transactions | Business intelligence, data lakes, and analytics |

    Which One Should You Choose for Data Warehousing?

    1. Choose Amazon Aurora if:
      • Your workload requires frequent transactions and OLTP-like operations.
      • You need an operational data store with some analytical capabilities.
      • Your dataset is relatively small, and you require real-time access to data.
    2. Choose Amazon Redshift if:
      • Your primary goal is big data analytics.
      • You need to run complex queries over terabytes or petabytes of data.
      • You require a scalable and cost-effective data warehouse with optimized storage and querying.

    Conclusion

    This is a brief post describing the research I went through. My conclusion: Aurora is best for transactional databases and operational reporting, while Redshift is purpose-built for data warehousing and analytics. If you need real-time analytics on live transactional data, you might even consider using both together, storing operational data in Aurora and periodically ETL-ing it into Redshift for deeper analysis.
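
    If you do end up using both together, the glue can start out as simple as a scheduled export-and-COPY job. Here’s a minimal sketch, assuming a PostgreSQL-compatible Aurora cluster; every connection string, table, bucket, and IAM role below is a placeholder:

    # Dump a table from Aurora, stage it in S3, then bulk-load it into Redshift.
    psql "$AURORA_DSN" -c "\copy sales TO 'sales.csv' CSV"
    aws s3 cp sales.csv s3://example-staging-bucket/sales/sales.csv
    psql "$REDSHIFT_DSN" -c "COPY sales FROM 's3://example-staging-bucket/sales/' IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy' CSV;"

    In practice you would probably reach for AWS DMS, Glue, or Redshift federated queries instead of a cron script, but the COPY-from-S3 pattern is the core idea either way.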

  • Back to WordPress: A Smoother Blogging Experience

    I spent some time today moving my blog back to WordPress, and there were two main reasons for the switch.

    First, I wanted to experiment with some Cloudflare settings in preparation for upcoming discussions with the WordPress admin at work. There’s a lot of potential for optimizing proxy, caching, and bot protection, so I figured it made sense to test things out firsthand.

    Second, WordPress simply makes posting so much easier compared to my previous static site setup. I had been using a Node package to generate static HTML from Markdown, which was cool and free to host on Cloudflare, but it felt clunky and didn’t get updated as often as I’d like.

    Say what you will about WordPress: yes, it can be clunky if you add too many plugins, and it’s a little bloated, but it does just work.

  • Migrating Kubernetes Containers on AWS from GP2 to GP3

    Migrating Kubernetes Containers on AWS from GP2 to GP3

    At work we have a Stackgres Kubernetes cluster that hosts our Postgres databases. This gives us high availability and easy data recovery, and it is generally pretty easy to manage. I admit that when I first started looking at Postgres on Kubernetes I was pretty skeptical, but it’s honestly given me very little to complain about. It does have some issues due to how the cluster was initially configured, which I’m planning to fix in the future.

    The K8s cluster was set up with GP2 as the default storage class, so the topic came up a few months ago of migrating to GP3 to increase our IOPS and also reduce cost.
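
    For new volumes, half of the work is just adding a gp3 storage class and making it the default. A minimal sketch, assuming the cluster uses the EBS CSI driver (the class name is arbitrary, and existing GP2 volumes are untouched by this):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp3
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"   # make new PVCs land on gp3
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true

    You would also remove the default annotation from the old gp2 class so only one class is marked as default. The harder part is the existing volumes, which is what the procedure below deals with.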

    I thought it would be pretty easy to migrate from GP2 to GP3 EBS volumes, since I have migrated standard EC2 servers’ EBS volumes with a quick CLI script or a GUI click. I sent in a ticket to Ongres, the company behind Stackgres, to see if they had any guidance on the process, again expecting a simple one-liner kubectl command or script.
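
    For comparison, this is all it takes on a plain EC2-attached volume, which is why I assumed the Kubernetes side would be similar (the volume ID is a placeholder):

    # Convert an existing EBS volume in place; the instance keeps running during the migration.
    aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3
    # Optionally watch the modification progress:
    aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0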

    Instead I received a long procedure and thought I’d document it here…

    1. Make pod “0” the cluster leader. I’m not 100% sure this is needed, but it was in my directions; I assume it’s so the later steps delete whichever pod is not the leader, but I didn’t test skipping it.
      1. kubectl exec -it -n <<namespace>> <<stackgres_pod_name>> -c patroni -- patronictl list
      2. If needed switchover: kubectl exec -it -n <<namespace>> <<stackgres_pod_name>> -c patroni -- patronictl switchover
    2. Take a backup… take a backup… take a backup! Don’t start this process without a recent backup as you are going to delete volumes.
    3. Set the cluster size to 1, destroying the replica: kubectl edit sgclusters.stackgres.io -n <<namespace>> <<stackgres_cluster_name>>
    4. Use kubectl get pvc to find the replica’s volume claim and release it by deleting it (see the sketch after this list).
    5. Use kubectl get pv to find the underlying volume and then delete the volume.
    6. Set the cluster size to 2, creating a new replica: kubectl edit sgclusters.stackgres.io -n <<namespace>> <<stackgres_cluster_name>>
    7. Watch for the replica to be rebuilt and sync up with the leader: kubectl exec -it -n <<namespace>> <<stackgres_pod_name>> -c patroni -- patronictl list
    8. Once the sync is complete, switch over to the replica and then repeat the steps above to delete the old leader’s volume.
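
    For steps 4 and 5, the actual commands amount to something like the following (same placeholder style as above; double-check that you’re deleting the replica’s claim and volume, not the leader’s):

    kubectl get pvc -n <<namespace>>
    kubectl delete pvc -n <<namespace>> <<replica_pvc_name>>
    kubectl get pv
    kubectl delete pv <<replica_pv_name>>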

    I admit that I made a mistake at one point and deleted a PVC that was still in use. Thankfully the Ongres team was able to help me recover from that. I’ll document that in a later post.

  • Traefik Adventures

    Traefik Adventures

    At work I was looking into ways to decrease our AWS public IP usage. We, along with the rest of the world, were hit with the new monthly charge for public IPv4 addresses. It was not a total surprise since AWS had announced it was coming, but the price tag was a bit of a shock as I hadn’t realized how many public IPs we were using.

    So I started thinking through the problem: what if we routed our traffic through a single load balancer and then hit some sort of internal load balancer to route traffic to our various apps and whatnot? After a little bit of searching I decided to check out Traefik, as it seems to have the features I think I’ll need.

    I have never used Traefik before so I decided to try it in my home lab, switching out Nginx Proxy Manager.

    Full disclaimer: NPM works well and was simple to use; I’m not bashing it here. I did have some minor annoyances, like trying to store its config in Git. I’m sure there are ways to do it (I tried Terraform, but it never worked the way I thought it should). But I wanted to try out Traefik prior to talking about it at work, so here we are.

    Traefik Overview

    Let me pause a moment here to talk about how Traefik works. There are lots of posts out there on this topic, but none of them did a great job of describing Traefik’s architecture.

    Static Config

    The static config (traefik.yaml or traefik.toml) describes global settings like logging and whether the dashboard/API are enabled. Entry points are set up here, so if you want 443 or 80 open you declare them here and give them a name like http or web. This is also where you set up the Let’s Encrypt settings; in my case I wanted to do DNS validation, so I’ve got those configs set for Cloudflare.

    log:
      level: WARN
      filepath: "/etc/traefik/log/traefik.log"
    accessLog:
      filePath: "/etc/traefik/log/access.log"
    api:
      dashboard: true                             # Enable the dashboard
      #insecure: true
    
    # Certificate Resolvers are responsible for retrieving certificates from an ACME server
    # See https://doc.traefik.io/traefik/https/acme/#certificate-resolvers
    certificatesResolvers:
      letsencrypt:
        acme:
    #      caServer: https://acme-staging-v02.api.letsencrypt.org/directory
          email: "[email protected]"  # Email address used for registration
          storage: "/etc/traefik/acme/acme.json"    # File or key used for certificates storage
          #tlsChallenge: {}
          dnsChallenge:
            provider: cloudflare
    
    
    
    entryPoints:
      http:
        address: ":80"                            # Create the HTTP entrypoint on port 80
        http:
          redirections:                           # HTTPS redirection (80 to 443)
            entryPoint:
              to: "https"                         # The target element
              scheme: "https"                     # The redirection target scheme
      https:
        address: ":443"                           # Create the HTTPS entrypoint on port 443
    
    global:
      checknewversion: true                       # Periodically check if a new version has been released.
      sendanonymoususage: true                    # Periodically send anonymous usage statistics.
    
    providers:
      docker:
        endpoint: "unix:///var/run/docker.sock"   # Listen to the UNIX Docker socket
        exposedByDefault: false                   # Only expose containers that are explicitly enabled (using the label traefik.enable=true)
        # network: "traefik-net"                    # Default network to use for connections to all containers.
        # swarmmode: true                           # Activates the Swarm Mode (instead of standalone Docker).
        # swarmModeRefreshSeconds: 15               # Defines the polling interval (in seconds) in Swarm Mode.
        # watch: true                               # Watch Docker Swarm events
      file:
        directory: "/etc/traefik/config"     # Link to the dynamic configuration
        watch: true                               # Watch for modifications
      providersThrottleDuration: 10               # Configuration reload frequency
    

    Dynamic Config

    The dynamic config (config/file.yaml) is what happens next, and it is dynamic in nature: add a Docker container and Traefik adds the routes and grabs a certificate. In my case I was manually configuring services in a file, because not everything I run is in Docker on the same host (looking at you, mailcow!). This gave me a lot of flexibility to route things exactly the way I wanted, and my config is stored in GitHub!

    http:
      # Add the router
      routers:
        dns1:
          entryPoints:
            - https
          tls:
            certResolver: letsencrypt
            options: "modern@file"
          service: dns1
          rule: "Host(`hostname.mydomain.com`)"
          middlewares:
           - "default@file"
      # Add the service
      services:
        dns1:
          loadBalancer:
            serversTransport: nossl
            servers:
              - url: https://internal.ip
      serversTransports:
        nossl:
          #required if using self signed certs internally
          insecureSkipVerify: true
    

    Once I got that lined up it was easy to then expand on this and move my other hosts behind Traefik.
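
    For the handful of services that do run in Docker on the same host as Traefik, the label-driven flow mentioned earlier looks roughly like this docker-compose snippet (the names are made up; it reuses the https entry point and letsencrypt resolver from my static config):

    services:
      whoami:
        image: traefik/whoami
        labels:
          - "traefik.enable=true"                                          # required since exposedByDefault is false
          - "traefik.http.routers.whoami.rule=Host(`whoami.mydomain.com`)"
          - "traefik.http.routers.whoami.entrypoints=https"
          - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"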

    Conclusion

    I’ve only been running Traefik for a couple of days now, but I’m impressed with what it can do out of the box. I like that it uses file-based config rather than a GUI, which forces me to put things in version control. It’s really fast too, which is what I expected since it’s a single Go binary. For that reason I installed it directly instead of running it in Docker; a container felt like a lot of overhead for one binary. Will I be keeping Traefik? At this point yes, I think I like it better. It’s a steeper learning curve to get started, but now that I kind of get it, I think it’s going to be the more powerful tool. Tomorrow I’ll be looking into what it would take to set up Traefik as an ingress controller for our Kubernetes cluster at work. I think it could reduce our need for AWS ALBs and public IPs by having a single load balancer direct all traffic to Traefik.

  • AWS Solution Architect Professional

    AWS Solution Architect Professional

    I had let my AWS Solution Architect Professional certification expire as I didn’t have a lot of spare time during my previous role. So I figured now with my surplus of time I would work on renewing it.

    A Cloud Guru

    For all my AWS certifications so far I had used A Cloud Guru, and it worked alright for me, so I decided to use their service again this time around. Pluralsight bought/merged with them sometime in the past few years, and they are still combining the two services. My training got caught in the middle of that merge, which is understandable but also unfortunate: it was confusing to log in and find new videos added or quizzes and tests modified partway through my course.

    The video content was pretty good. If you have any experience or have taken the associate-level test, some of the content will be familiar, but don’t skip too much of the videos; I kept finding little nuggets of information that were helpful on the quizzes. The challenges are good brain problems: trying to figure out in your head how you’d respond to a scenario. The demos and labs were okay; some of them felt too easy or not detailed enough to really help my training, but your mileage may vary.

    My Tips

    • A Cloud Guru / Pluralsight offer a playground, use it. Play with all the things you are learning. There are only a few exceptions that you aren’t able to create in the playground, like multi account setups that centralize permissions and logging.
    • Have your own account to play in; there’s nothing like actually building and supporting your own blog or whatever. (I run a mail server.)
    • Give yourself lots of time, don’t set your test date too early. But also don’t procrastinate.

    Good Luck!

  • One Year of Mailcow

    One Year of Mailcow

    I’ve been hosting my personal domain’s email on Mailcow for over a year now, ever since Google Apps started charging for their service, and I have to say it works pretty well. I had a good architecture to start but needed to iterate on the design of the infrastructure. A few things changed: I swapped out EFS for a second EBS data volume that is dynamically attached at EC2 boot time, I moved my S3 backups into Glacier to reduce costs, and I ended up needing to upgrade my EC2 to a larger instance. I still need to revisit the metrics to determine whether that upgrade was really necessary, but you know how it is when you break something the family is using… you hear about it.
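
    The “dynamically attached at boot” piece is just a bit of EC2 user data. A rough sketch of the idea; the volume ID, device name, and mount point are placeholders, and it assumes IMDSv1 for the metadata call:

    #!/bin/bash
    # Attach the persistent mailcow data volume to whichever instance is booting.
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id "$INSTANCE_ID" --device /dev/xvdf
    # Wait for the attachment, then mount it where the mailcow data lives.
    aws ec2 wait volume-in-use --volume-ids vol-0123456789abcdef0
    mount /dev/xvdf /opt/mailcow-data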

    AWS Hosting

    I do host this on AWS; my reasoning was just keeping my skills sharp. I had originally spun up the stack with CloudFormation to test out some of its newer features, but I have since converted those templates to Terraform. Terraform is so much simpler… there’s no comparison. This is not the cheapest option, I could host this anywhere or at home, but I chose AWS to continue honing my skills there. Also, let’s be honest, AWS is a really solid host.

    Mailcow Pros:

    • It’s stupid simple to update: they have a script that pulls the latest changes from Git, pulls Docker images, restarts services, and then cleans up after itself.
    • It just works. I’ve had no real problems other than ones I’ve created; if you leave it alone, it just runs.
    • Backup and restore works. I’ve only done full backups and restores, so I can’t comment on restoring individual messages, but I can spin up an empty EC2 instance and bring my server up quickly with a restore from S3.

    Mailcow Cons:

    • It’s a bit bloated; some included components may not really be needed. For instance, I like ActiveSync for my mobile device, but honestly I could probably just use IMAP IDLE.
    • SOGo is ugly, and we did have some issues with the calendar. It’d be nice if there were a better option. I know there is an option to use Nextcloud, but I haven’t played with that yet.
    • The documentation could use some work; there were places where I had to do extra research and guessing when building out my solution.

    Conclusion

    If you want to host your own email, Mailcow just works. There are other, less resource-intensive solutions out there with good reviews too; I suggest trying them and picking what works for you. With hindsight being 20/20, would I self-host email again? I think so. I’ve learned a lot about email, specifically DKIM and SPF records (I’ll do a whole post about those), so it’s been a good growing experience. I haven’t lost any email (knock on wood), and honestly the server does just work.

  • Replacing Google Workspaces

    Replacing Google Workspaces

    With the announcement that my free email hosting on Google is coming to an end, I decided to go down the road of setting up my own email server; since I was going to have to pay for email going forward, why not just host it? Is this a good idea? I’m not sure. There are lots of blog posts telling you not to host your own email, but in my situation I have several custom domains feeding into Google and around 20 freeloading family users, so cost became an issue.

    Most of my cloud expertise is in AWS, which made it a pretty easy decision to use their services to host my server. Since mailcow is dockerized, it was fairly easy to create an ASG with a single EC2 instance. I host the data on EFS, which lets me kill off the EC2 instance and have it rebuild itself within a few minutes. And to top it off, I am using SES for outbound email, which helps keep my sent messages out of spam filters.
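
    The SES piece is just SMTP smart-hosting. mailcow lets you set this up as a relayhost through its admin UI rather than hand-editing Postfix, but in plain Postfix terms the idea boils down to settings roughly like these (the region and credentials file are placeholders):

    # main.cf: relay all outbound mail through Amazon SES
    relayhost = [email-smtp.us-east-1.amazonaws.com]:587
    smtp_sasl_auth_enable = yes
    smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    smtp_sasl_security_options = noanonymous
    smtp_tls_security_level = encrypt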

    So far there have been very few gotchas; the issues that have come up were my own doing, from overthinking the problem. Once the family starts using the server, I’m sure I’ll need to iterate. My only real fear is losing data, and I’ve been able to test against that a few times by completely tearing down the stack, rebuilding, and reloading from a backup.

    Next I’ll be working on importing the family’s email into the server (mailcow uses imapsync under the hood) and actually getting some real traffic beyond my own mail.
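
    For reference, a single-mailbox imapsync run looks roughly like this; every host, user, and password is a placeholder, and in mailcow you can define this as a sync job in the UI instead of running it by hand:

    # One-way copy from the old provider into the new mailbox; re-run until cutover.
    imapsync \
      --host1 imap.gmail.com --user1 [email protected] --password1 'old-app-password' --ssl1 \
      --host2 mail.example.com --user2 [email protected] --password2 'new-password' --ssl2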