There is a long-standing tradition of comparing science and technology to magic. Kurt Vonnegut, for instance, said that “science is magic that works.” But the most famous example is Arthur C. Clarke’s so-called third law, which goes:
Any sufficiently advanced technology is indistinguishable from magic.
Why would Clarke’s law be true? Because magic, fundamentally, is incomprehensible. That’s what the word means, even if we don’t necessarily think about it like that. In ancient times, when nobody understood electricity, an electric arc in the sky during a storm was divine magic; a fictional character throwing electric arcs from his hands was storytelling magic. Once we actually discovered the principles of electricity, and could generate electric arcs on demand, we ceased to call it magic. It is technology.
Thus the converse law:
Any sufficiently understood magic is indistinguishable from technology.
Until recently, virtually every human-invented1 tool or method to affect the world — every capability — was understood by at least some people. (The number of capabilities that we developed and then forgot rounds to zero.) So it doesn’t really make sense to call any real capability “magic.” If you’re talking about the perception of a subset of people, then sure; a helicopter might be “magic” to the uncontacted Sentinelese people, a quantum computer might be “magic” to you and me. But there are engineers and scientists who know how helicopters and quantum computers work in detail, and there’s a much wider set of people who at least understand their basic principles, so calling them magical would be corny and confusing.
As a result, the technology–magic metaphor is rare in serious contexts. It makes for fun blog posts and fun science fiction, but deliberately calling real technology “magical” is rarely a wise choice. Part of the reason is that it’s not very informative: if Elon Musk said that he was building “magical cars,” nobody would know wtf he was talking about. It’s better to be precise and say “electric cars that can self-drive” or whatever. Magic is too generic.
Still, sometimes companies do use the magic metaphor in tech products, especially when they want to emphasize the idea of making things generically better. From a design perspective, this often takes the form of icons like a magic wand (🪄) and sparkles (✨). For instance, in Apple’s Photos app, you can click the magic wand to instantly make a picture better.
The reason it’s a magic wand and not some other icon is that you’re not supposed to care about what it does under the hood. If you want to manually adjust the brightness, saturation and contrast of your picture, you can. But if you just want to make it “better” according to some algorithm invented by Apple Inc., then you can just use the magic wand.
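To make the hand-wavy part concrete: Apple’s actual enhancement algorithm is proprietary and far more sophisticated, but a minimal sketch of “one tap, no knobs” could be simple auto-contrast, stretching an image’s pixel values to fill the full range. Everything below is an illustration, not Apple’s method.

```python
# Toy "magic wand": auto-contrast by linearly stretching grayscale pixel
# values (0-255) to the full range. A stand-in for whatever the real,
# proprietary enhancement does; the user never sees these steps.

def auto_enhance(pixels):
    """Stretch a list of 0-255 grayscale values to span the full range."""
    lo, hi = min(pixels), max(pixels)
    if lo == hi:                      # flat image: nothing to stretch
        return pixels[:]
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

print(auto_enhance([60, 100, 140]))  # a dull, low-contrast strip of pixels
```

The point of the wand icon is exactly that none of this is exposed: one function call, no parameters.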
For some time before Elon Musk took it over, the mobile Twitter app also used magical iconography. The sparkle symbol in the top right allowed users to switch between a chronological feed and an algorithmic one — showing the “top” tweets first.
Again, the implication is that you could, if you wanted, simply trust an incomprehensible algorithm to pick tweets for you. It wasn’t actually incomprehensible: presumably some software engineers knew what the algorithm did. But to users, it might as well have been magical. Magic, in technological design, refers to processes that are too complicated for users to worry about. Just tap the sparkly icon, and everything will just be better somehow.2
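Twitter’s real ranking system was a large machine-learning pipeline, but the idea of an algorithmic feed can be sketched in a few lines: score each tweet, sort by score instead of by time. The weights below are invented for illustration.

```python
# Toy "sparkle" feed: rank posts by an engagement score instead of recency.
# The 2.0 / 5.0 weights are made up; a real system learns its scoring
# function from data, which is part of why it feels opaque to users.

def rank_feed(tweets):
    """Return tweets sorted by a hypothetical engagement score, best first."""
    def score(t):
        return 2.0 * t["likes"] + 5.0 * t["retweets"]
    return sorted(tweets, key=score, reverse=True)

feed = [
    {"id": 1, "likes": 10, "retweets": 0},   # older but well-liked
    {"id": 2, "likes": 2,  "retweets": 8},   # fewer likes, many retweets
]
print([t["id"] for t in rank_feed(feed)])    # → [2, 1]
```

A chronological feed is the same function with `score` replaced by the timestamp — the sparkle icon marks the switch from a rule anyone can state to one nobody needs to.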
Which brings us to AI. The large AIs of today — large language models like GPT-4 — are the ultimate incomprehensible technology. They’re made of giant deep neural networks, containing millions of “neurons” and a similarly large number of connections between them. Nobody truly understands what goes on in the depths of these networks, although we are making progress in that respect. AIs are alien minds, brought into existence by being fed colossal amounts of data rather than through careful design by engineers.
AI is the closest we’ve gotten to “sufficiently advanced technology” as per Clarke’s law. It feels like magic in a way that previous technologies didn’t. It feels like magic even to its practitioners.
Given this widespread feeling, it’s perhaps not surprising that ✨sparkly✨ iconography has suddenly become much more common, especially as tech companies rush to find ways to integrate AI into their products.
I noticed it a few days ago while I was in a Zoom meeting. Zoom apparently allows you to use generative AI for meeting summaries and also an “AI companion” that presumably does something magical, as evidenced by the sparkles:
I’ve noticed it too in Notion, where they gave it the color purple to distinguish it from other options:
Interestingly, when you click, you get more specific icons depending on what you want… but there’s still a generic “improve writing” command that uses the magic wand, and a “simplify language” that just sparkles. I’m guessing their designer didn’t know how to represent the concept of simplification as a tiny icon.
Sometimes the sparkles are very subtle, but still there, like in the Arc browser’s new tab AI renaming feature:
Products that are based on AI also use sparkles and magic wands, even though they don’t necessarily need to distinguish AI from the rest. For instance, Lex is an AI-based word processor, and sure enough:
OpenAI itself, the heart of AI technocapital, chose to represent the superiority of its GPT-4 model over GPT-3.5 with sparkles:
One last example: in 2022, one of my friends tried to create an AI image generation startup (for which I wrote this post), and they chose, as their name… Sparkl:
I’ve seen more examples in a few articles that explored the sparkle-AI trend from a design perspective.3 As one of them points out, it’s nice that we’re finally moving away from the “brain with wires” icons that used to be the main way to represent AI.
But I do wonder about what this trend means for the future. Is it just a fad? It’s possible that AI becomes both more mundane and more understood, in which case the ubiquity of sparkles in 2023 may become a cringey cliché, a relic from a time when we were all hopelessly naïve about the promise and mechanics of AI. The sparkle ✨ will go the way of the floppy disk 💾, once the universally recognized symbol of saving files, now almost extinct because of auto-saving.
Or maybe the trend will continue, or accelerate. If AI becomes better than us at programming, at writing, at science, and especially at making new AIs, then it and its descendants will constantly create new technology that is incomprehensible to humans. Magic will flood the world, and for the first time it’ll be sufficiently advanced to deserve the name. We’ll have cures for cancer that are, effectively, magic potions; we’ll have sources of energy that are the equivalent of shooting electric arcs from our hands. We’ll trust the AI just like we trust the engineers who come up with clever algorithms. We’ll be like our distant ancestors, living in a world that is wondrous and unexplainable, and, above all, sparkly.
There are capabilities that weren’t invented, but evolved by nature. We don’t always understand those well — most notably the inner workings of the brain. Such capabilities are almost never called technology and rarely called magic.
An older example is the use of “wizard” programs that are really just software installers. Again we see the metaphor of hiding complexity. Software installation can be a mess, but the wizard performs his magic and makes everything work.
See for instance An AI Icon Standard for Apps? by Luke Wroblewski (2023); The Unstoppable Rise of Spark ✨, as Ai’s Iconic Symbol, by Rishi Shah (2023); AI Iconography (Or, Does AI Sparkle?) by Jordan Rothe (2023); How Google is branding AI: Sparkles, Duet, and ‘generative’ by Abner Li (2023). Not sure why I bothered to write 2023 in all cases. Of course they’re all 2023.
So, if I can ‘make’ a fire I ‘understand’ it?
Especially the ones who know the most about how this place functions admit having no clue. If we really lose the ability to see the magic of lightning because we think we’ve got it, then we are definitely lost.
If you grow up through dismantling your childhood bed and thus proving ‘there are no monsters’ you didn’t yet get the magic stories you were told.... and better prepare for some nasty surprises.
Picking up wands and using them without a long apprenticeship...is exactly what Goethe tried to point at. It is quite okay to not understand and live in awe and wonder. It is horrific to use the excuse of knowing to destroy the old tree....
This is a good discussion to have but it is not just about AI, a hammer is no less magical than AGI. Any bird or insect, any cell outcompetes our clumsy attempts at recreating parts of that. We are the most stupid of all beings if we do not start to ‘understand’ our limitations. The intelligence of reality is indifferent to our IQ.
Are you familiar with Iain McGilchrist’s theory? Very simply said he states our brains hold two opposing views of the world. One produces technology, uses maps, representations, calculations, plans, solidity, opaqueness. The other experiences the world through the senses, sees similarities, patterns, wholeness, uses metaphor, humour, poetic language, makes translucent, is capable of clarity without reduction. Only one of the two includes the other. And that’s the ‘one’ that should be king but no longer is in our society.
Sorry, this rant is not directed at you, I enjoyed the post. Good spotting of a trend. This is just very alive in me, and the outsiders view of magic feels a bit triggering.....dealing with the monsters under the hood of technology...with the unknown at the edge of the map....keep going....
Great post. I'll pick one nit. You say:
"The large AIs of today — large language models like GPT-4 — are the ultimate incomprehensible technology. They’re made of giant deep neural networks, containing millions of “neurons” and a similarly large number of connections between them"
From my research, this is incorrect. Large language models run via software on standard computers, not neural nets. In fact, I just posted about this yesterday: https://speclectic.substack.com/p/sentient-aisyes-no-when
I especially like your conclusions though. I think we may well be headed for a magical AI world.