Hello there. My name is Elliot Clowes.
This is a blog mostly about technology.
You can stay updated via RSS.

Foursquare ends its restaurant app, keeps check-ins

Foursquare split into two apps in 2014, and now they’re killing one of them°.

What you need to know:

Swarm remaining alive is good news for me — I use it daily to track where I’ve been. It builds a nice personal location history without any effort.

The City Guide app had value. When traveling it often pointed me to restaurants that weren’t in the top Google or TripAdvisor results — useful when struggling to book a table.

Foursquare as a company isn’t going anywhere. They’ve transformed into a location data provider, powering features in apps like Snapchat and Uber. The City Guide shutdown just makes that B2B shift a bit more official.



Share files in your own Permanent Public Folder

You should have your own ‘Permanent Public Folder’ for files you want to share.

Don’t use something like Dropbox or Imgur. They might work now. But eventually they won’t.

Instead, host them in a folder on a domain you own.

And then:

  • Never change the domain.
  • Never delete/rename/move a file.

In my case, I use elliotclowes.com. It’s not the shortest or coolest domain I own. But I’m never going to get rid of it.

I use the folder /cold. It doesn’t make much sense. Cold storage is storage that’s not frequently accessed and is often stored on offline drives or CDs. But it makes sense to me, and I know to never delete it or touch it. Use whatever works for you.

Avoid using subfolders. There’s too much temptation to then move or sort files later on. I originally put files in subfolders based on the year I uploaded them. I don’t anymore. But I won’t move anything – I’ve done it now and I’m not going to change it. Remember: never delete/rename/move a file.

Also avoid checking the folder. You’ll inevitably see a file called UNADJUSTEDNONRAW_thumb_69b5.jpg that is an image of a grape and be tempted to delete it.

I admit, this isn’t the right solution for everyone.

If you’re sharing large video files, the storage and bandwidth costs might be too high.

When you share a file with something like Dropbox, it will often nicely copy the link to your clipboard. That won’t happen here.

Uploading from your phone can be annoying – though there are now plenty of apps for uploading to SFTP servers or S3.

It can even be annoying on a computer, to be fair. But software like ExpanDrive and Mountain Duck will let you access your server from the file system. Then you can just drag and drop files.

You could also ask an LLM like GPT-4 to create a shell script for you to upload the files. That’s what I do. When Hazel sees a new file in the folder it runs a shell script that uses the AWS CLI to upload it (here’s my .sh file).
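For anyone who wants to try the same setup, here’s a minimal sketch of such a script. The bucket name, domain and /cold path are placeholders – swap in your own:

```shell
#!/bin/sh
# Minimal sketch of an upload script for Hazel to run on each new file.
# Assumptions: the AWS CLI is installed and configured, "example-bucket"
# and "example.com" are hypothetical stand-ins, and Hazel passes the
# matched file's path as the first argument.
FILE="$1"
NAME="$(basename "$FILE")"

# Copy the file into the bucket's /cold folder.
aws s3 cp "$FILE" "s3://example-bucket/cold/$NAME"

# Print the resulting public URL for easy copying.
echo "https://example.com/cold/$NAME"
```

With the bucket served from your own domain, the echoed URL is the permanent link you’d share.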



From CDs to AI: Media companies keep undervaluing their content

In 2002 the music industry was in a slump. Declining CD sales and rampant piracy had eaten into profits and labels were desperate.

Steve Jobs saw an opportunity. Apple had launched the iPod the year prior and he wanted a digital music store to pair with it. So he held meetings with the record labels, and a deal was struck with the five majors – Universal, Sony, Warner, EMI, and BMG.

But in their piracy panic and desire to get profits back the labels undersold themselves. Firstly, they gave Apple control over the pricing. iTunes could sell songs individually, at $0.99. Users didn’t have to buy the full $9.99 album anymore.

But more importantly they simply underestimated how big digital music would become and sold the rights for what in hindsight would be a fraction of their true value.

In 2007 a similar story unfolded when Netflix launched their streaming platform. When Netflix approached studios and networks about buying digital rights, the rights holders were more than pleased to sell at a low rate. To them it was free money. Streaming was seen as a perpetual secondary market to DVD sales and cable TV. And most of the rights Netflix wanted were for back-catalogue movies and TV shows – content that didn’t produce much income anyway.

Hollywood made the same mistake the music industry made. They didn’t realise the value of what they had because they underestimated what a new technology – digital streaming – would become.

Web publishers are now in a similar position. Their profits have been dwindling for years and they’re searching for new revenue streams.

So when the AI companies that had been scooping up their content for free started getting $80 billion valuations the web publishers wanted their piece.

Just like the record labels, they wanted the ‘piracy’ – AI web crawlers scooping up anything and everything free of charge – to be stopped. So deals have started being made. The Associated Press, News Corp, the Financial Times, reddit, Stack Overflow, Vox and more have all done deals.

But the question remains: have they underestimated the value of their content, just as the music industry and Hollywood did before them? It’s tempting to think that with the current mania for AI, a $250 million deal over five years might be a win for a publisher like News Corp.1

But history tells me to doubt it. Media companies rarely value their content accurately in the face of new technologies. My bet is that the AI companies know the true worth of this data, and that publishers are selling them the very content AI needs to eventually supersede them. Publishers are giving away the keys to their own kingdom.


  1. Disclosure: I work for a News Corp subsidiary. ↩︎



Apple's Vision Pro lacks a killer app

Financial Times°:

The lack of a “killer app” to encourage customers to pay upwards of $3,500 for an unproven new product is seen as a problem for Apple.

Apple said recently that there were “more than 2,000” apps available for its “spatial computing” device, five months after it debuted in the US.

That compares with more than 20,000 iPad apps that had been created by mid-2010, a few months after the tablet first went on sale, and around 10,000 iPhone apps by the end of 2008, the year the App Store launched.

The iPhone had third-party apps that made me really want one. Angry Birds, Flipboard, Evernote, Shazam, Reeder and RunKeeper were all apps I couldn’t wait to try. There aren’t any like that for the Vision Pro.

And that’s a problem. Especially when the iPhone 3G cost £200 (£358 in today’s money). The Vision Pro costs close to 10x that at £3,500.

Early data suggests that new content is arriving slowly. According to Appfigures, which tracks App Store listings, the number of new apps launched for the Vision Pro has fallen dramatically since January and February.

It certainly felt like there was momentum behind app releases early on, but that momentum has waned considerably. You worry it will come to a standstill.

Nearly 300 of the top iPhone developers, whose apps are downloaded more than 10mn times a year — including Google, Meta, Tencent, Amazon and Netflix — are yet to bring any of their software or services to Apple’s latest device.

Forget innovative and wonderful apps from indie developers – the Vision Pro doesn’t even have the behemoth ‘default’ apps you’d expect, like Netflix.

I get the sense that developers are tired of Apple’s 30% cut and strict App Store rules, and they’re showing Apple the finger. And it’s a finger that’s easy to raise when there are so few Vision Pros out there.

Will the Vision Pro be a success? I don’t own one. But I got to briefly use one at work. It was magical. But there wasn’t much to do. And it was big and heavy. But give it five years and it will be lighter and there should be more apps.

It still has the anti-social issue that all current VR headsets have. But I’m hopeful, and believe that with time it will be added to the pantheon of devices you really need to own, alongside the computer and the phone.

Or maybe it will forever remain a nice-to-have, like the Apple Watch. Who knows.



From Texas to Kenya: Biden's Ambitious Semiconductor Strategy

The New York Times° reports on the Biden administration’s efforts to reshape the global semiconductor supply chain:

If the Biden administration had its way, far more electronic chips would be made in factories in, say, Texas or Arizona.

They would then be shipped to partner countries, like Costa Rica or Vietnam or Kenya, for final assembly and sent out into the world to run everything from refrigerators to supercomputers.

The US government wants to transform the world’s chip supply chain. It’s a two-pronged approach: lure foreign companies to set up shop in the States, and then find partner countries to handle the final assembly.

The goals are clear: blunt China’s growing influence in the semiconductor industry, reduce supply chain risks, and create jobs on home soil. It’s not just about chips either – they’re aiming to do the same with green tech like EV batteries and solar panels.

The numbers are impressive. Over $395 billion in semiconductor manufacturing investment and $405 billion in green tech and clean power have been attracted to the US in the past three years.

But it’s still going to be tough. East Asia still has the edge in cutting-edge tech, skilled workers, and lower costs. Taiwan alone produces more than 60% of the world’s chips and nearly all of the most advanced ones.

And the US semiconductor industry is facing a potential shortage of up to 90,000 workers in the next few years.

One of the most intriguing parts of this whole endeavour is the countries being brought into the fold. Costa Rica, Indonesia, Mexico, Panama, the Philippines, Vietnam, and soon Kenya. Not exactly the first places that spring to mind when you think “high-tech manufacturing”.

And if these efforts pay off, the US share of global chip manufacturing could rise from 10% to just 14% by 2032 – according to one report. Not exactly world domination. But it’s a start, I suppose.



The AI data gold rush meets its match: Cloudflare

TechCrunch°:

Cloudflare, the publicly traded cloud service provider, has launched a new, free tool to prevent bots from scraping websites hosted on its platform for data to train AI models.

“Customers don’t want AI bots visiting their websites, and especially those that do so dishonestly,” the company writes on its official blog. “We fear that some AI companies intent on circumventing rules to access content will persistently adapt to evade bot detection.”

Cloudflare’s stepping into the AI scraping fray with a new tool to block sneaky bots. The tool uses machine learning (ironically) to spot AI bots trying to masquerade as regular users.

It’s a timely move, given the recent kerfuffle over AI companies like Perplexity° playing fast and loose with web scraping ethics.

AI companies really need to start being more respectful of content creators. Because I can feel the tide turning against them. More and more people and companies who publish on the web are becoming anti-AI.

After the story broke about Perplexity not respecting robots.txt°, it felt like loads of people started thinking about how to block AI web crawlers for the first time.

Cloudflare’s tool might help. But the real solution needs to come via the AI industry taking a long, hard look at its data practices and quite simply, not being dicks.



SoundCloud doesn't let you fast forward without signing in

In the grand tradition of web hostility, SoundCloud has made a bold move.

They’ve decided that your time isn’t valuable. That your experience doesn’t matter.

Want to skip ahead 30 seconds in a podcast? Sorry, you’ll need to sign in for that privilege.

It’s essentially a throwback to the days of linear radio. No control. No choice. Just sit there and take it.

How many listeners will try to skip, hit the sign-in wall, and never return? It’s a textbook example of prioritising metrics over user experience.

I get it. They want more sign-ups. They’re chasing those “monthly active user” numbers.

But in the race for engagement they’ve forgotten the most important engagement of all – the one between the listener and the content they love.

If your sign-up growth strategy involves frustrating users, it’s time to rethink your strategy.



'Why AI can’t replace science'

FastCompany (Gary Smith):

Today, AI is being increasingly integrated into scientific discovery to accelerate research, helping scientists generate hypotheses, design experiments, gather and interpret large datasets, and write papers. But the reality is that science and AI have little in common and AI is unlikely to make science obsolete. The core of science is theoretical models that anyone can use to make reliable descriptions and predictions.

The core of AI, in contrast, is, as Anderson noted, data mining: ransacking large databases for statistical patterns.

The hype around AI replacing science is getting a bit out of hand. This article does a cracking job of puncturing that bubble a bit.

The core argument is spot on: science is about building theoretical models that anyone can use to make reliable predictions. AI, on the other hand, is just glorified data mining – finding patterns without necessarily understanding why they exist.

It’s not that AI isn’t useful in science – it clearly is. But it’s a tool, not a replacement for the scientific method. The real test is whether AI actually leads to new products and services being developed faster and cheaper. So far, the evidence is pretty thin on the ground.

The most telling quote comes from the CEO of an AI-powered drug company: “People are saying, AI will solve everything. They give you fancy words. We’ll ingest all of this longitudinal data and we’ll do latitudinal analysis. It’s all garbage. It’s just hype.”

AI might be changing the world, but let’s not get carried away. Science isn’t going anywhere.



Companies are cooling on the cloud

BBC News:

This year, software firm 37signals will see a profit boost of more than $1m (£790,000) from leaving the cloud.

“To be able to get that with such relatively modest changes to our business is astounding,” says co-owner and chief technology officer, David Heinemeier Hansson (DHH). “Seeing the bill on a weekly basis really radicalised me.”

37signals, the company behind Basecamp and Hey, has moved away from cloud services and seen a significant boost to their bottom line as a result.

For 37signals, owning hardware and using a shared data centre has proven substantially cheaper than renting cloud resources. But cost isn’t the only factor at play. DHH also raises concerns about the internet’s resilience when so much of it relies on just three major cloud providers.

This trend isn’t limited to 37signals. The BBC reports that 94% of large US organisations have repatriated some workloads from the cloud in the last three years, citing issues like security, unexpected costs, and performance problems.

And if you’re a company using a lot of storage and bandwidth the cloud can be incredibly expensive. There’s a reason Netflix and Dropbox use AWS for things like metadata, but use their own servers for large files.

That said, cloud computing obviously isn’t going anywhere. The key takeaway comes from Mark Turner at Pulsant:

“The change leaders in the IT industry are now the people who are not saying cloud first, but are saying cloud when it fits. Five years ago, the change disruptors were cloud first, cloud first, cloud first.”

It seems we’re moving towards a more nuanced approach. The future might not be all-cloud or all on-premises, but a mix of both. A sensible evolution I’d say.



Figma pulls its AI tool after it got caught copying Apple's Weather app

Figma’s had to pull its new AI-powered app design tool after it started churning out clones of Apple’s weather app.

The ‘Make Design’ feature was quickly called out by someone on Twitter, showing the AI’s ‘original’ designs were dead ringers for Apple’s Weather app.

Figma CEO Dylan Field owned up to the blunder:

“Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline.”

It’s another reminder that AI-generated content is always a remix of its training data. But sometimes that ‘remix’ is essentially a copy.

Figma reckons designers need new tools to “explore the option space of possibilities”. Let’s hope those tools can come up with something more original than a weather app that’s already on millions of iPhones.

404 Media has the full story.



Meta's AI image labeling: 'Made with AI' becomes 'AI Info'

Meta’s having a bit of a wobble with its AI labelling. They’ve gone from “Made with AI” to “AI info” after photographers got a bit miffed about their regular photos being tagged as AI-generated. Apparently even basic editing tools were triggering the label.

The new tag’s supposed to be clearer, indicating that an image might have used AI tools in the editing process, rather than implying it’s entirely AI-generated.

But it’s still using the same detection tech, so if you’ve used something like Adobe’s Generative AI Fill, you might still get slapped with the label.

The whole thing’s a bit of a mess, really. We’ve got social networks trying to label AI content to inform users, editing tool makers adding AI features willy-nilly, and photographers caught in the middle doing their best to straddle the line between originality and AI.

It’s a classic case of technology outpacing policy.

TechCrunch has the full story.



Small Sites Want Analytics Too

Like a lot of bloggers I have a small, quiet audience. So I’m a fan of using analytics to see who’s visiting my site. It’s a delight to discover the various corners of the globe that have stumbled upon my writings.

However, most analytics tools don’t cater to this niche market. Google Analytics (GA) is the behemoth of tracking – it’s free but overkill, privacy-invading, and has a confusing web interface. Other choices are limited and often expensive, charging £10-£20/month, which isn’t justifiable for many small bloggers like myself.

As a result I simply haven’t used or cared about analytics for many years. The last time I regularly used one was when Mint was still alive.

That’s why I was thrilled to discover Tinylytics. Their free plan offers 1,000 page hits/month, which is perfect for many bloggers. And if you need more, their paid plan is a very reasonable $5/month – a price I’d gladly pay.

And one of the best features is that you can track up to 5 sites on the free plan and unlimited sites on the paid plan. As a web tinkerer with multiple small sites, this is a game-changer for me.

Also I love the page that explains why they offer a free plan, as it pretty much sums up what I’ve been saying:

A lot of analytics software is too expensive. Period. Heck, I just started a small side project or a personal site and I don’t want to shell out $9 - $14 per month just for analytics that looks pretty.
[…]
There are free options from big providers, but guess what… they’re probably using your data to better meet their own needs and most likely advertisers.
[…]
Having a free plan, from someone that deeply cares, and from an individual, not a huge corporate or venture funded company, is the best start you can give yourself without worrying what will happen with your data. It sits on my server, and is backed up hourly to an offsite encrypted backup. That’s it. Oh and you won’t break the bank either. I think that’s a win win.

If you’re a small blogger looking for an affordable, privacy-focused, and user-friendly analytics solution, I highly recommend giving Tinylytics a try.



Newsletters are the new blogs. And that's a good thing.

I used to be a newsletter hater. My email inbox is a wasteland of work, spam and things I don’t care about. It’s not the place I go to when I want to be entertained or delighted. And why would I use email when I have RSS?

For those that don’t know, an RSS ‘feed’ is essentially a plain text version of a blog that an RSS ‘reader’ will then process and nicely display for you. It’s an ad-free, dedicated reading place with no tracking, offline functionality (once synced), and customisable font size, text width, etc.

It’s great. And during the heyday of blogging it was a popular way to read blogs as you didn’t have to visit a site to get new posts. But when Google Reader, the most popular RSS reader, shut down in 2013, it effectively killed off RSS for mainstream users. Its usage has been declining ever since, and blogging declined with it.

Meanwhile social media rose and people shifted from writing on blogs to Twitter. Gone were the days of a chronological list of blog posts, neatly organised in folders, and in its place was an endless feed, organised by opaque algorithms designed to maximise engagement at any cost. It was sad.

So when I started thinking of newsletters as an alternative to social media rather than a replacement for RSS, I began to see them more fondly – and even root for them. Because to encourage people to consume higher-quality writing and spend less time on social media, there needs to be a good, easy alternative. RSS isn’t it. Email is.

In many ways, email is similar to an RSS reader. Both have read/unread flags, folders/labels, fewer ads and less tracking than the web, and customisable font sizes if you’re using an email client.

Then there’s the matter of writers getting paid. For years writers struggled to make money on the web. They could maybe make a bit of money via ads, sponsored posts or membership schemes. But they needed a lot more than 1,000 true fans to support themselves, because there wasn’t a system or a culture for those fans to pay them. Email newsletters solve this problem, as every newsletter platform allows writers to charge subscribers. And with the rise of Substack and paid newsletters in general, people are more accustomed to paying.

Older web users like myself may still pine for the RSS glory days and look down on newsletters and email as a poor alternative. But the fact is they are a practical way for people to read and a viable way for writers to find an audience and get paid for their work.



How-to: Instantly make Google minimal and ad-free

A simple trick that gives you the minimal Google search results of old — no ads, no ‘People Also Ask’ boxes, just a clean list of links.

You just need to add &udm=14 to the end of your Google search URL.

Though obviously you don’t want to do that manually each time. So create a ‘custom’ search engine in your browser and make it your default.

The URL you need to use is: https://www.google.com/search?q=%s&udm=14
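To see what the %s substitution produces, here’s a small shell sketch (the query is just an example) that builds the final URL:

```shell
# Sketch: build the minimal-results search URL for a query.
# Spaces in the query become '+' in the URL's query string.
query="static site generators"
echo "https://www.google.com/search?q=$(printf '%s' "$query" | tr ' ' '+')&udm=14"
# → https://www.google.com/search?q=static+site+generators&udm=14
```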

Here’s how to add custom search engines for:

via Tedium.co



Gemini 1.5 Pro accepts 1 million token prompts

Every.to° (Dan Shipper):

I got access to Gemini Pro 1.5 this week, a new private beta LLM from Google that is significantly better than previous models the company has released. (This is not the same as the publicly available version of Gemini that made headlines for refusing to create pictures of white people. That will be forgotten in a week; this will be relevant for months and years to come.)

Somehow, Google figured out how to build an AI model that can comfortably accept up to 1 million tokens with each prompt. For context, you could fit all of Eliezer Yudkowsky’s 1,967-page opus Harry Potter and the Methods of Rationality into every message you send to Gemini.

1 million tokens is insane (tokens ≈ words, kind of). For context, OpenAI’s GPT-4 Turbo can accept 128,000 tokens.

To be fair, these days I rarely run into GPT-4’s token limit. My prompts aren’t that big. But a 1-million-token window opens up a new world. The author of the article says you could send Gemini a 2,000-page book as a prompt. And I could see myself using it for that use case. Often I remember reading something but can’t find the passage. I could copy and paste the full book text from a .mobi file and ask Gemini for help.

I think it would also be useful for my notes. They’re all individual .md text files. But I’m sure there’s a tool out there that could combine them into one big file. And then I could send it to Gemini and ask questions.
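In fact, no special tool is needed – a few lines of shell will do it. A sketch, assuming the notes live in a hypothetical, flat ~/notes directory:

```shell
#!/bin/sh
# Merge every Markdown note into one file, prefixing each with its
# filename so the model can say which note an answer came from.
# Assumes a hypothetical ~/notes directory of .md files.
for f in ~/notes/*.md; do
  printf '\n\n## %s\n\n' "$(basename "$f")"
  cat "$f"
done > all-notes.md
```

The resulting all-notes.md can then be pasted into a single prompt.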





You can find even more posts in the archive.