author    Bradley Taunt <bt@btxx.org> 2024-01-22 13:06:19 -0500
committer Bradley Taunt <bt@btxx.org> 2024-01-22 13:06:19 -0500
commit    d2e4da10c806d815eded44ade076babb78802c16 (patch)
tree      7494261e22f3255926204164449c7345f5b500e5 /_posts/2023-07-24-purge.md
Initial commit to new cgit platform
Diffstat (limited to '_posts/2023-07-24-purge.md')
-rw-r--r--  _posts/2023-07-24-purge.md  46
1 file changed, 46 insertions, 0 deletions
diff --git a/_posts/2023-07-24-purge.md b/_posts/2023-07-24-purge.md
new file mode 100644
index 0000000..a4b7da7
--- /dev/null
+++ b/_posts/2023-07-24-purge.md
@@ -0,0 +1,46 @@
+---
+title: "Purging the 1MB Club"
+layout: post
+summary: "I finally got around to testing and purging 1MB Club member websites"
+---
+
+This project has been running for almost 3 years now, and I still enjoy adding new members to the club! The only issue is that I have never been great at keeping the quality of the member list consistent. That means removing dead links, sites that now forward to less-than-safe domains, and so on.
+
+That all changes today!
+
+## The First Purge
+
+I wrote up a very crude script that checks the status of each existing club member's URL and flags those that error out. After running the check, I found I could remove over 60 websites that were either dead or broken.
+
+> **Note:** If you happen to notice your website has been incorrectly caught in the "purge-crossfire", then don't hesitate to shoot me an email letting me know! I'm only human after all.
+
+## Checking URL Script
+
+This Ruby script is far from perfect, but it works well for my own personal workflow. Feel free to steal it and tweak it for your own purposes as you see fit!
+
+```ruby
+require 'httparty'
+require 'nokogiri'
+
+# Some member sites have broken or self-signed certificates,
+# so skip SSL verification entirely
+HTTParty::Basement.default_options.update(verify: false)
+
+# Pull down the current member list from the live site
+response = HTTParty.get('https://1mb.club')
+
+document = Nokogiri::HTML(response.body)
+website_urls = document.css("#container tr")
+
+puts "Scanning website member URLs..."
+website_urls.each do |single_site|
+  link = single_site.css("a.site").first
+  next if link.nil? # skip table rows that don't contain a member link
+
+  url = link.attribute("href").value
+  begin
+    puts "Checking: " + url
+    HTTParty.get(url, timeout: 4)
+  rescue StandardError
+    # Anything that times out or errors gets flagged for manual review
+    puts "<!-------- ERROR: " + url
+  end
+end
+```
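+
+If you want to run this yourself, the only setup is installing the two gems the script depends on. Here's a minimal sketch of that workflow, assuming you've saved the script as `check_members.rb` (the filename is just an example):
+
+```sh
+gem install httparty nokogiri
+ruby check_members.rb
+```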
+
+## More Posts to Come
+
+I have been slacking on web-related content for this blog. My plan is to improve that, and I even have a few outlines ready to go. I'll be sure to keep you all posted when something new is published!