Hello there. My name is Elliot Clowes.
This is a blog mostly about technology.
You can stay updated via RSS.

From CDs to AI: Media companies keep undervaluing their content

In 2002 the music industry was in a slump. Declining CD sales and rampant piracy had eaten into profits and labels were desperate.

Steve Jobs saw an opportunity. Apple had launched the iPod the year prior and he wanted a digital music store to pair with it. So meetings were held with record labels and a deal was struck with the five major labels – Universal, Sony, Warner, EMI, and BMG.

But in their piracy panic and desire to get profits back, the labels undersold themselves. Firstly, they gave Apple control over the pricing. iTunes could sell songs individually at $0.99. Users didn’t have to buy the full $9.99 album anymore.

But more importantly, they simply underestimated how big digital music would become and sold the rights for what, in hindsight, was a fraction of their true value.

In 2007 a similar story unfolded when Netflix launched their streaming platform. When Netflix approached studios and networks about buying digital rights, the studios were more than happy to sell at a low rate. To them it was free money. Streaming was seen as a perpetual secondary market to DVD sales and cable TV. And most of the rights Netflix wanted were for back catalog movies and TV shows. Content that didn’t produce much income anyway.

Hollywood made the same mistake the music industry made. They didn’t realise the value of what they had because they underestimated what a new technology – digital streaming – would become.

Web publishers are now in a similar position. Their profits have been dwindling for years and they’re searching for new revenue streams.

So when the AI companies that had been scooping up their content for free started getting $80 billion valuations the web publishers wanted their piece.

Just like the record labels, they wanted the ‘piracy’ – AI web crawlers scooping up anything and everything free of charge – to be stopped. So deals have started being made. The Associated Press, News Corp, the Financial Times, reddit, Stack Overflow, Vox and others have all done deals.

But the question remains: have they underestimated the value of their content, just as the music industry and Hollywood did before them? It’s tempting to think that with the current mania for AI, a $250 million deal over five years might be a win for a publisher like News Corp.1

But history tells me to doubt it. Media companies rarely value their content accurately in the face of new technologies. My bet is that the AI companies know the true worth of this data, and that publishers are selling them the very content they need to eventually supersede them. Publishers are giving away the keys to their own kingdom.


  1. Disclosure: I work for a News Corp subsidiary. ↩︎



Apple's Vision Pro lacks a killer app

Financial Times°:

The lack of a “killer app” to encourage customers to pay upwards of $3,500 for an unproven new product is seen as a problem for Apple.

Apple said recently that there were “more than 2,000” apps available for its “spatial computing” device, five months after it debuted in the US.

That compares with more than 20,000 iPad apps that had been created by mid-2010, a few months after the tablet first went on sale, and around 10,000 iPhone apps by the end of 2008, the year the App Store launched.

The iPhone had third-party apps that made me really want one. Angry Birds, Flipboard, Evernote, Shazam, Reeder and RunKeeper were all apps I couldn’t wait to try. There’s nothing like that for the Vision Pro.

And that’s a problem. Especially when the iPhone 3G cost £200 (£358 in today’s money). The Vision Pro costs close to 10x that at £3,500.

Early data suggests that new content is arriving slowly. According to Appfigures, which tracks App Store listings, the number of new apps launched for the Vision Pro has fallen dramatically since January and February.

It certainly feels like there was some momentum behind app releases early on, but that momentum has faded a lot now. You worry it’s going to come to a standstill.

Nearly 300 of the top iPhone developers, whose apps are downloaded more than 10mn times a year — including Google, Meta, Tencent, Amazon and Netflix — are yet to bring any of their software or services to Apple’s latest device.

Forget innovative and wonderful apps from indie developers – the Vision Pro doesn’t even have the behemoth ‘default’ apps you’d expect, like Netflix.

I get the sense that developers are tired of Apple’s 30% cut and strict App Store rules and they’re showing them the finger. And it’s a finger that’s easy to raise when there are so few Vision Pros out there.

Will the Vision Pro be a success? I don’t own one. But I got to briefly use one at work. It was magical. But there wasn’t much to do. And it was big and heavy. But give it five years and it will be lighter and there should be more apps.

It still has the anti-social issue that all current VR headsets have. But I’m hopeful, and believe that with time it will be added to the pantheon of devices you really need to own, alongside the computer and phone.

Or maybe it will forever remain a nice-to-have, like the Apple Watch. Who knows.



From Texas to Kenya: Biden's Ambitious Semiconductor Strategy

The New York Times° reports on the Biden administration’s efforts to reshape the global semiconductor supply chain:

If the Biden administration had its way, far more electronic chips would be made in factories in, say, Texas or Arizona.

They would then be shipped to partner countries, like Costa Rica or Vietnam or Kenya, for final assembly and sent out into the world to run everything from refrigerators to supercomputers.

The US government wants to transform the world’s chip supply chain. It’s a two-pronged approach: lure foreign companies to set up shop in the States, and then find partner countries to handle the final assembly.

The goals are clear: blunt China’s growing influence in the semiconductor industry, reduce supply chain risks, and create jobs on home soil. It’s not just about chips either – they’re aiming to do the same with green tech like EV batteries and solar panels.

The numbers are impressive. Over $395 billion in semiconductor manufacturing investment and $405 billion in green tech and clean power have been attracted to the US in the past three years.

But it’s still going to be tough. East Asia still has the edge in cutting-edge tech, skilled workers, and lower costs. Taiwan alone produces more than 60% of the world’s chips and nearly all of the most advanced ones.

And the US semiconductor industry is facing a potential shortage of up to 90,000 workers in the next few years.

One of the most intriguing parts of this whole endeavour is the countries being brought into the fold. Costa Rica, Indonesia, Mexico, Panama, the Philippines, Vietnam, and soon Kenya. Not exactly the first places that spring to mind when you think “high-tech manufacturing”.

And if these efforts pay off, the US share of global chip manufacturing could rise from 10% to just 14% by 2032 – according to one report. Not exactly world domination. But it’s a start, I suppose.



The AI data gold rush meets its match: Cloudflare

TechCrunch°:

Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

“Customers don’t want AI bots visiting their websites, and especially those that do so dishonestly,” the company writes on its official blog. “We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection.”

Cloudflare’s stepping into the AI scraping fray with a new tool to block sneaky bots. The tool uses machine learning (ironically) to spot AI bots trying to masquerade as regular users.

It’s a timely move, given the recent kerfuffle over AI companies like Perplexity° playing fast and loose with web scraping ethics.

AI companies really need to start being more respectful of content creators. Because I can feel the tide turning against them. More and more people and companies who publish on the web are becoming anti-AI.

After the story broke about Perplexity not respecting robots.txt° it felt like loads of people started thinking about how to block AI web crawlers for the first time – before that, it simply hadn’t crossed their minds.
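If you’re wondering what a site’s robots.txt actually says to these crawlers, it’s easy to check. Below is a minimal Python sketch using the standard library’s robotparser. The site URL is a placeholder and the user-agent strings (GPTBot for OpenAI, CCBot for Common Crawl) are just the commonly cited examples – treat them as assumptions and swap in whichever crawlers and sites you care about.

    # Minimal sketch: see whether a site's robots.txt allows a given crawler.
    # The site URL and user-agent strings below are illustrative placeholders.
    from urllib.robotparser import RobotFileParser

    SITE = "https://example.com"         # hypothetical site
    CRAWLERS = ["GPTBot", "CCBot", "*"]  # example AI crawler user agents

    parser = RobotFileParser()
    parser.set_url(f"{SITE}/robots.txt")
    parser.read()  # fetch and parse the robots.txt

    for agent in CRAWLERS:
        allowed = parser.can_fetch(agent, f"{SITE}/")
        print(f"{agent}: {'allowed' if allowed else 'disallowed'}")

Of course, robots.txt is only a polite request – as the Perplexity story showed, it tells you what a site is asking for, not what a crawler will actually do.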

Cloudflare’s tool might help. But the real solution needs to come via the AI industry taking a long, hard look at its data practices and quite simply, not being dicks.



SoundCloud doesn't let you fast forward without signing in

In the grand tradition of web hostility, SoundCloud has made a bold move.

They’ve decided that your time isn’t valuable. That your experience doesn’t matter.

Want to skip ahead 30 seconds in a podcast? Sorry, you’ll need to sign in for that privilege.

It’s essentially a throwback to the days of linear radio. No control. No choice. Just sit there and take it.

How many listeners will try to skip, hit the sign-in wall, and never return? It’s a textbook example of prioritising metrics over user experience.

I get it. They want more sign-ups. They’re chasing those “monthly active user” numbers.

But in the race for engagement they’ve forgotten the most important engagement of all – the one between the listener and the content they love.

If your sign-up growth strategy involves frustrating users, it’s time to rethink your strategy.



'Why AI can’t replace science'

FastCompany (Gary Smith):

Today, AI is being increasingly integrated into scientific discovery to accelerate research, helping scientists generate hypotheses, design experiments, gather and interpret large datasets, and write papers. But the reality is that science and AI have little in common and AI is unlikely to make science obsolete. The core of science is theoretical models that anyone can use to make reliable descriptions and predictions.

The core of AI, in contrast, is, as Anderson noted, data mining: ransacking large databases for statistical patterns.

The hype around AI replacing science is getting a bit out of hand. This article does a cracking job of puncturing that bubble a bit.

The core argument is spot on: science is about building theoretical models that anyone can use to make reliable predictions. AI, on the other hand, is just glorified data mining – finding patterns without necessarily understanding why they exist.

It’s not that AI isn’t useful in science – it clearly is. But it’s a tool, not a replacement for the scientific method. The real test is whether AI actually leads to new products and services being developed faster and cheaper. So far, the evidence is pretty thin on the ground.

The most telling quote comes from the CEO of an AI-powered drug company: “People are saying, AI will solve everything. They give you fancy words. We’ll ingest all of this longitudinal data and we’ll do latitudinal analysis. It’s all garbage. It’s just hype.”

AI might be changing the world, but let’s not get carried away. Science isn’t going anywhere.



Companies are cooling on the cloud

BBC News:

This year, software firm 37signals will see a profit boost of more than $1m (£790,000) from leaving the cloud.

“To be able to get that with such relatively modest changes to our business is astounding,” says co-owner and chief technology officer, David Heinemeier Hansson (DHH). “Seeing the bill on a weekly basis really radicalised me.”

37signals, the company behind Basecamp and Hey, has moved away from cloud services and seen a significant boost to their bottom line as a result.

For 37signals, owning hardware and using a shared data centre has proven substantially cheaper than renting cloud resources. But cost isn’t the only factor at play. DHH also raises concerns about the internet’s resilience when so much of it relies on just three major cloud providers.

This trend isn’t limited to 37signals. The BBC reports that 94% of large US organisations have repatriated some workloads from the cloud in the last three years, citing issues like security, unexpected costs, and performance problems.

And if you’re a company using a lot of storage and bandwidth the cloud can be incredibly expensive. There’s a reason Netflix and Dropbox use AWS for things like metadata, but use their own servers for large files.

That said, cloud computing obviously isn’t going anywhere. The key takeaway comes from Mark Turner at Pulsant:

“The change leaders in the IT industry are now the people who are not saying cloud first, but are saying cloud when it fits. Five years ago, the change disruptors were cloud first, cloud first, cloud first.”

It seems we’re moving towards a more nuanced approach. The future might not be all-cloud or all on-premises, but a mix of both. A sensible evolution I’d say.



Figma pulls its AI tool after it got caught copying Apple's Weather app

Figma’s had to pull its new AI-powered app design tool after it started churning out clones of Apple’s Weather app.

The ‘Make Design’ feature was quickly called out by someone on Twitter, showing the AI’s ‘original’ designs were dead ringers for Apple’s Weather app.

Figma CEO Dylan Field owned up to the blunder:

“Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline.”

It’s another reminder that AI-generated content is always a remix of its training data. But more often than not that ‘remix’ is essentially a copy.

Figma reckons designers need new tools to “explore the option space of possibilities”. Let’s hope those tools can come up with something more original than a weather app that’s already on millions of iPhones.

404 Media has the full story.



Meta's AI image labeling: 'Made with AI' becomes 'AI Info'

Meta’s having a bit of a wobble with its AI labelling. They’ve gone from “Made with AI” to “AI info” after photographers got a bit miffed about their regular photos being tagged as AI-generated. Apparently even basic editing tools were triggering the label.

The new tag’s supposed to be clearer, indicating that an image might have used AI tools in the editing process, rather than implying it’s entirely AI-generated.

But it’s still using the same detection tech, so if you’ve used something like Adobe’s Generative AI Fill, you might still get slapped with the label.

The whole thing’s a bit of a mess, really. We’ve got social networks trying to label AI content to inform users, editing tool makers adding AI features willy-nilly, and photographers caught in the middle doing their best to straddle the line between originality and AI.

It’s a classic case of technology outpacing policy.

TechCrunch has the full story.



Small Sites Want Analytics Too

Like a lot of bloggers I have a small, quiet audience. So I’m a fan of using analytics to see who’s visiting my site. It’s a delight to discover the various corners of the globe that have stumbled upon my writings.

However, most analytics tools don’t cater to this niche market. Google Analytics (GA) is the behemoth of tracking – it’s free but overkill, privacy-invading, and has a confusing web interface. Other choices are limited and often expensive, charging £10-£20/month, which isn’t justifiable for many small bloggers like myself.

As a result I simply haven’t used or cared about analytics for many years. The last time I regularly used one was when Mint was still alive.

That’s why I was thrilled to discover Tinylytics. Their free plan offers 1,000 page hits/month, which is perfect for many bloggers. And if you need more, their paid plan is a very reasonable $5/month – a price I’d gladly pay.

And one of the best features is that you can track up to 5 sites on the free plan and unlimited sites on the paid plan. As a web tinkerer with multiple small sites, this is a game-changer for me.

Also I love the page that explains why they offer a free plan, as it pretty much sums up what I’ve been saying:

A lot of analytics software is too expensive. Period. Heck, I just started a small side project or a personal site and I don’t want to shell out $9 - $14 per month just for analytics that looks pretty.
[…]
There are free options from big providers, but guess what… they’re probably using your data to better meet their own needs and most likely advertisers.
[…]
Having a free plan, from someone that deeply cares, and from an individual, not a huge corporate or venture funded company, is the best start you can give yourself without worrying what will happen with your data. It sits on my server, and is backed up hourly to an offsite encrypted backup. That’s it. Oh and you won’t break the bank either. I think that’s a win win.

If you’re a small blogger looking for an affordable, privacy-focused, and user-friendly analytics solution, I highly recommend giving Tinylytics a try.



Newsletters are the new blogs. And that's a good thing.

I used to be a newsletter hater. My email inbox is a wasteland of work, spam and things I don’t care about. It’s not the place I go to when I want to be entertained or delighted. And why would I use email when I have RSS?

For those that don’t know, an RSS ‘feed’ is essentially a plain text version of a blog that an RSS ‘reader’ will then process and nicely display for you. It’s an ad-free, dedicated reading place with no tracking, offline functionality (once synced), and customisable font size, text width, etc.

It’s great. And during the heyday of blogging it was a popular way to read blogs as you didn’t have to visit a site to get new posts. But when Google Reader, the most popular RSS reader, shut down in 2013, it effectively killed off RSS for mainstream users. Its usage has been declining ever since, and blogging declined with it.

Meanwhile social media rose and people shifted from writing on blogs to Twitter. Gone were the days of a chronological list of blog posts, neatly organised in folders, and in its place was an endless feed, organised by opaque algorithms designed to maximise engagement at any cost. It was sad.

So when I started seeing newsletters as an alternative to social media rather than a replacement for RSS, I began to view them more fondly – and even root for them. Because to encourage people to consume higher-quality writing and spend less time on social media, there needs to be a good, easy alternative. RSS isn’t it. Email is.

In many ways, email is similar to an RSS reader. Both have read/unread flags, folders/labels, fewer ads and less tracking than the web, and customisable font sizes if you’re using an email client.

Then there’s the matter of writers getting paid. For years writers struggled to make money on the web. They could maybe make a bit of money via ads, sponsored posts or membership schemes. But they needed a lot more than 1,000 true fans to support themselves, because there wasn’t a system or a culture for those fans to pay them. Email newsletters solve this problem, as every newsletter platform allows writers to charge subscribers. And with the rise of Substack and paid newsletters in general, people are more accustomed to paying.

Older web users like myself may still pine for the RSS glory days and look down on newsletters and email as a poor alternative. But the fact is they are a practical way for people to read and a viable way for writers to find an audience and get paid for their work.



How-to: Instantly make Google minimal and ad-free

A simple trick that gives you the minimal Google search results of old — no ads, no ‘People Also Ask’ boxes, just a clean list of links.

You just need to add &udm=14 to the end of your Google search URL.

Though you obviously don’t want to do that manually each time. So create a ‘custom’ search engine in your browser and make it your default.

The URL you need to use is: https://www.google.com/search?q=%s&udm=14
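If you want to see what that template expands to, here’s a rough sketch in Python that does the same substitution the browser would – the example query is arbitrary, and the script just prints and opens the resulting URL:

    # Rough sketch: build the minimal, ad-free Google results URL and open it.
    # This mirrors what the browser does with the %s placeholder above.
    import webbrowser
    from urllib.parse import quote_plus

    def minimal_google(query: str) -> str:
        return f"https://www.google.com/search?q={quote_plus(query)}&udm=14"

    url = minimal_google("static site generators")
    print(url)
    webbrowser.open(url)  # opens the clean, links-only results page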

Here’s how to add custom search engines for:

via Tedium.co



Gemini 1.5 Pro accepts 1 million token prompts

Every.to° (Dan Shipper):

I got access to Gemini Pro 1.5 this week, a new private beta LLM from Google that is significantly better than previous models the company has released. (This is not the same as the publicly available version of Gemini that made headlines for refusing to create pictures of white people. That will be forgotten in a week; this will be relevant for months and years to come.)

Somehow, Google figured out how to build an AI model that can comfortably accept up to 1 million tokens with each prompt. For context, you could fit all of Eliezer Yudkowsky’s 1,967-page opus Harry Potter and the Methods of Rationality into every message you send to Gemini.

1 million tokens is insane (a token is roughly a word). For context, OpenAI’s GPT-4 can accept 32,000 tokens.

To be fair, these days I rarely run into GPT-4’s token limit. My prompts aren’t that big. But 1 million tokens opens up a new world. The author of the article says you could send Gemini a 2,000-page book as a prompt. And I could see myself using it for that use-case. Often I remember reading something but can’t find the passage. I could copy and paste the full book text from a .mobi file and ask Gemini for help.

I think it would also be useful for my notes. They’re all individual .md text files. But I’m sure there’s a tool out there that could combine them into one big file. And then I could send it to Gemini and ask questions.
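In fact you wouldn’t even need a dedicated tool – a few lines of Python would do the combining. A rough sketch, assuming the notes sit in a single folder of .md files (the folder path and output filename here are placeholders):

    # Rough sketch: glue a folder of Markdown notes into one big file,
    # ready to paste into a long-context model like Gemini.
    # NOTES_DIR and OUTPUT are placeholders - point them at your own files.
    from pathlib import Path

    NOTES_DIR = Path("~/notes").expanduser()
    OUTPUT = Path("all-notes.md")

    parts = []
    for note in sorted(NOTES_DIR.glob("*.md")):
        # Prefix each note with its filename so the model can say where
        # a passage came from.
        parts.append(f"\n\n## {note.name}\n\n{note.read_text(encoding='utf-8')}")

    OUTPUT.write_text("".join(parts), encoding="utf-8")
    print(f"Combined {len(parts)} notes into {OUTPUT}")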



The Disappearing Hum of Fans

These days I work in a big, open plan office with no assigned desks. So I’ve sat next to a lot of people. But more importantly, I’ve sat next to a lot of computers.

And I realised today that I’m yet to hear a single computer fan. Either because the laptop is an entirely fanless MacBook or a modern Windows one that has a quiet or rarely spun-up fan. It’s all so quiet out there. No more soft hums as a computer starts to work up a sweat. No more jet engine-like screeches as it hits high load. The only noise my office features these days is the clickity clack of keyboards, the chittity chat of people and the croaky coughs of winter flu.

Though there is one person still flying the flag for the fan. And that’s me. I have an Intel-based MacBook Pro with a fan that spins up often. But I feel like a bit of a dinosaur having a fan – it feels like coming into work with a typewriter.

I knew fans were officially finished when I had two people sat next to me ask “what’s that noise?!” with a confused look on their face when they heard my laptop. The sound of a fan was such a distant memory to them that they couldn’t even recognise it any more.

And fans should be a distant memory. A computer without a fan is quieter, cooler and simpler. But there will always be a part of me that’s nostalgic for them. I quite like hearing them whir up as CPU load increases. There’s a pleasant mechanical quality to it (the constant spinning akin to the constant ticking of a mechanical watch). It’s an audible link to how hard your computer is working. And I will miss that sound when I eventually upgrade to a modern MacBook.



Lessons learned from making the front page of Hacker News and /r/technology

(AKA the legally required ‘my WordPress blog went down after making Hacker News’ post-mortem post)

A few days ago one of my posts made the front page of both Hacker News and /r/technology. It was a bit of a surprise to me. But it was an even bigger surprise to my $5 Linode VPS, which quickly collapsed under the strain.

Thankfully I was home and blogging when it went down, so I noticed right away and quickly tried to diagnose why on earth PHP was using so much CPU. But the idea that it was due to a traffic influx never even crossed my mind. So I spent the first 30 minutes poking around and troubleshooting.

Eventually though, I thought to myself: what if it was due to traffic? So off I went to my Cloudflare dashboard. And sure enough, traffic!

But of course my silly self struck again and I didn’t think for a single moment that these were genuine, organic, human visits on my read-by-a-dozen-people blog. Nope, in my head this had to be a DDoS attack – it was the only logical answer! So I turned on Cloudflare’s “I’m Under Attack” mode and left it at that.

As I sat there though, I pondered: what if I’d perhaps been Fireballed or something? (And it all had to stay as pondering because at this point I had no analytics on my blog. Good for the privacy of my twelve readers, but not so good when trying to work out where a load of traffic was coming from.) So, I went to daringfireball.net. But nope, I hadn’t been linked to on there. Mmm.

I then rather pathetically typed my blog into Google News, seeing if that might shine any light. But nope.

Hacker News maybe? And… cook a cat! My latest post at the tippy top. Lots of actual humans on my blog! Panic mode engaged.

And also, how do I fix my downed blog?!

Well, it was complicated by the fact that this blog was, at the time, powered by WordPress. Self-hosted WordPress blogs have a long and storied tradition of going down after making Hacker News – a tradition my blog shamefully continued. In a world of increasingly static sites which survive any amount of traffic, the LAMP-based WordPress blog – which goes down as easily as Twitter in 2008 – does look a tad dinosauric and inefficient in comparison.

Thankfully in the end though, fixing the problem (at least temporarily) was actually fairly simple thanks to the fluidity of the cloud. I just threw horsepower at the problem and simply resized my server (Linode did this in just over two minutes, which I found rather impressive).

Well, it’s now a week or so on. And now that the dust has settled, here are some lessons and curious things I discovered after making the front page of /r/technology and Hacker News.

How much traffic do /r/technology and Hacker News send?

This is a little tough to know exactly, as during the first hour of being on HN my blog was mostly down. But by the looks of it – and rather surprisingly – HN sent many more visitors than /r/technology, a subreddit with over 11 million readers (though to be fair, my post made the top of HN and only reached sixth on reddit). Here are the unique visitor numbers from day one:

  • Hacker News: circa 38,000+
  • /r/technology: 19,771

The majority of traffic is phone traffic

I’m sure this comes as no surprise to people who often look at their website’s analytics. But as someone looking at them for the first time, it certainly came as a surprise to me that 69% of the visitors I got were on their phones.

Whenever I get the yearly itch to redesign the look of this blog I do of course always ensure it looks okay on a phone, but it’s an afterthought for the most part. My priority instead is how it looks on a desktop. But apparently I’ve got this backwards. Nowadays the focus should evidently be very much on the smartphone, as it’s how the majority of visitors will experience the site.

Is Cloudflare worth it?

As the owner of a rarely visited blog, having my site run through Cloudflare felt like storing a pencil sharpener in a shipping container. So it was fun to see it actually be called into action and have proper traffic to deliver.

Cloudflare did let me down in some ways. Its “Always Online” feature didn’t save the day. It’s supposed to show a cached copy of my blog if the server goes down. But it apparently relies on the Internet Archive’s slow-to-crawl Wayback Machine for this (no hate, Internet Archive! You’re an amazing free service and one of the best corners of the web and I love you). And as the blog post that made Hacker News was only a day or so old there wasn’t a cached copy to serve. Which was a shame.

But when it came to caching and serving files Cloudflare did very well, with a 95% hit rate. Of the 107 GB of bandwidth sent out, my origin server handled just 4 GB of it. Cloudflare did the rest. And I’m sure they delivered it all far faster to the non-European visitors than my London-based Linode would have.

And the thing is, I titled this section ‘is Cloudflare worth it?’ But like most individuals, I’m not on the paid plan – just the free tier. So yes, Cloudflare is very much worth it. Aside from a few lock-in concerns and its tendency to present too many CAPTCHAs to genuine visitors, in my mind it continues to be almost a requirement for any website to be proxied through Cloudflare. It’s a remarkable service and tool.

What sort of server do you need for your WordPress blog to survive a Hacker News onslaught?

Do you run a self-hosted WordPress blog and want it to stay up and running if you make Hacker News and /r/technology? Well, it looks like a $5/mo VPS isn’t going to be enough.

My blog was hosted on a VPS with 1 vCPU and 1 GB RAM when it went down. When things went south I upgraded to a $40/mo one with 4 vCPU and 8 GB RAM, which proved to be overkill – though I did want to guarantee no more downtime.

So how much compute do you need? Well I spent a lot of my time anxiously monitoring htop throughout all this. And from the looks of it the minimum requirement if you want to even stand a chance of your WordPress blog surviving a Hacker News beating is 2 vCPU with 2GB of RAM.

And this is presuming you have a WordPress caching plugin installed and Cloudflare handling the vast majority of static files. It also presumes you’re using Linode, who have pretty high-end CPUs (AMD EPYC 7601s in my server’s case). If you’re using an alternative with a less beefy CPU – like Digital Ocean or Vultr – you might need more than 2 vCPU to be safe.

You should also harden your WordPress site

It’s important to prepare your site for a potential influx of visitors. I know, I know, no one reads your blog. No one reads mine either. But for one day they did. And it was rather embarrassing when it immediately melted.

So be over prepared. If you self-host WordPress, you need to take it a little seriously, and some steps are likely required:

  • Install a caching plugin. There are many. I use WP Super Cache. You don’t stand a chance without one.
  • Use Cloudflare. It’s a very useful tool. It caches static content and speeds up your blog, as well as protecting you from bots.
  • If your blog does go down, you’ll want to know. UptimeRobot can check every five minutes and email you if there are any problems – and all for free. (Or roll your own with a small script – see the sketch after this list.)
  • Go overkill when it comes to hosting. I just presumed that because I wasn’t using some over-sold shared hosting service that my blog would be able to handle a sudden influx of traffic. But I was wrong. If I’d just been linked to by a fairly popular blog I might have been okay. But Hacker News and reddit are a different kettle of fish, and your site will likely go down if you end up being featured there. So sadly you’re just going to have to stump up the cash and pay the roughly $20/mo for a really good server if you want to self-host WordPress and survive a Hacker News-sized influx. That’s just the WordPress penalty I’m afraid.
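On the monitoring point: if you’d rather not rely on a third-party service, a tiny script run from cron gets you a crude version of the same thing. A minimal sketch – the URL is a placeholder, and you’d wire the failure branch up to whatever alert you like (email, a push notification service, etc.):

    # Minimal uptime check, meant to be run from cron every few minutes.
    # URL is a placeholder; hook the failure path up to your alert of choice.
    import sys
    import urllib.request

    URL = "https://example.com/"   # your blog's front page
    TIMEOUT = 10                   # seconds before we call it "down"

    try:
        with urllib.request.urlopen(URL, timeout=TIMEOUT) as response:
            if response.status != 200:
                print(f"DOWN: {URL} returned HTTP {response.status}")
                sys.exit(1)
    except Exception as exc:
        print(f"DOWN: {URL} unreachable ({exc})")
        sys.exit(1)

    print(f"OK: {URL}")

A crontab entry like */5 * * * * will run it every five minutes, so you at least find out your blog is down before Hacker News tells you.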

With all that being said… bye-bye WordPress

After all this bother I’ve actually decided to do the cliche thing and say goodbye to WordPress and instead go the simple, static route. This blog is now powered by Hugo and hosted on Amazon S3. And thank goodness I no longer have to worry about MySQL databases, spammy plugins or wp-login.php attacks from Ukrainian hackers.

Finally

Finishing up. Overall it was really rather fun making Hacker News and /r/technology. And despite there being over 1,200 comments submitted across the two sites (apparently people like talking about their hatred for ads almost as much as they like talking about their hatred for Netflix subscription price increases), basically zero were mean to me – which was a nice surprise.

Also, hello to the new people who now follow the blog! Glad to have you here. Expect two posts a year :/


And one final note for older readers who subscribed via RSS: the RSS feed URL is no longer https://imlefthanded.com/feed/. It is now https://imlefthanded.com/index.xml. Apologies for the annoyance, but it’s probably best to update the URL for this blog in your feed reader of choice. Or you can get new updates via Twitter if you prefer. Thanks!





You can find even more posts in the archive.