The future belongs to no-code and AI, and there's nothing you can do about it.

Systems should not become more difficult to interact with as they grow in complexity; instead, they should adapt and become more intuitive.
Alexander Eckhart
April 12, 2021
Reworked poster for the movie “Metropolis”

I still remember the first time I watched “Alien” (1979). I was around 12 years old, not very discerning but very into science fiction and astronomy.

Aside from the Xenomorph itself, which was unlike anything I’d ever seen before, I was completely fascinated by the world the characters inhabited. It was dark, full of dirt, sweat, smoke, and of course, death. It seemed far removed from the worlds of “Star Wars”, “Star Trek” or even “2001: A Space Odyssey”. This world had plenty of twists and hidden meanings, and it wasn’t afraid to challenge the escape plans I’d make from the comfort of my couch. If we were ever going to have industrial space travel, this was probably what it was going to look like. Bulky machines, tight spaces, lots of condensation, and not much sightseeing.

As the years progressed, and with every occasional viewing, I started to notice more details about the “Alien” world.

For me, one of the movie’s biggest revelations was Ash, the ship’s science officer. It didn’t strike me as a huge reveal back then (him being an android and all), but looking back at it now I have to admit… his programming was pretty dope.

Compared to the Nostromo’s central computer, which required a whole room to operate and responded only in short written sentences, Ash’s architecture must have been top-notch even for that century. They may have served different purposes, but if you think about it, they were both user interfaces made by humans for humans to work with.

Mankind has always been fascinated by the idea of building something that can equal or even exceed its collective wit and strength. Concepts of artificial servants and companions date at least as far back as antiquity and describe human-like creatures made from stone, metal, or wood that would come to the aid of gods and men in extraordinary circumstances. From the automated soldiers (bhuta vahana yanta, or “spirit movement machines”) said to have been built to protect the relics of the Buddha, to Aristotle pondering the idea that automata could one day make the abolition of slavery possible, mankind has kept refining its own image in the mirror.

Looking at our history, one could assume that we would be doing a pretty awesome job building artificial stuff by now, right?

Well, partially…

We’ve been building industrial robots that can outmatch the precision of human movement for the better part of a century. We’ve developed materials that can look and feel like human skin and hair. We’ve become pretty good at energy storage and transport, too.

The behavioural part is still a bit tricky, though.

True artificial intelligence (a.k.a. the Singularity) is not something you can program to behave like a human. That would be just another user interface designed for humans, like the Nostromo’s central computer, the Terminator, Ash, Bishop, and even Rachael from “Blade Runner”.

Human behaviour is determined by many factors: the bodies we are born in, the genetics we inherit, the education we receive, and the way we interact with the world.

There’s a very high probability that we will program machines to mimic human behaviour and appearance down to the most authentic emotion and gesture. We will grow to love and hate them just as much as we do ourselves. We will probably debate about their rights and freedoms, and blur the lines between God and man to the point where they become the next steps in evolution.

But that still won’t be the Singularity.

Until we figure it out, we must continue to improve the user interfaces we create and empower people to adapt and grow.

Systems should not become more difficult to interact with as they grow in complexity; instead, they should adapt to their users and become more intuitive for them.

The best interface I can think of is a system that understands what we want to achieve and guides us towards the goal, no matter the language or the input method we use.

For humans, that’s human behavior.

Written, spoken, or interpreted, humans have evolved to communicate with others through language, logic, and emotion. A human will try their best to understand what someone else is trying to say, even if they don’t speak the same language. Our way of communicating with each other is driven by curiosity and by non-verbal cues we have learned to interpret.

A very good example of this sort of user interface appears in the movie “Her” (2013). Similar to Ash, Samantha was advertised as “Not just an operating system, but a consciousness”. It lived in the pocket of your shirt, and its purpose was to mimic human behaviour to the point where the users interacting with it would communicate more efficiently and become better at whatever they aimed to do.

The applications could be endless: from a simple assistant that helps you get shit done throughout the day, to a French digital professor that helps you learn the language faster by actually listening to you and adapting to your level.

The best exercise we can do right now is to ask ourselves what’s the next step in building a better user interface.

Is it a touchscreen with haptic feedback that never gets scratched? Is it a no-code interface that helps people build stuff without having to code? Or is it a machine learning algorithm that adapts to the information it receives and improves its answers over time?
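That last idea, an interface that adapts to what it receives, doesn’t have to be science fiction. Here’s a toy sketch of the principle: a menu that reorders itself based on what the user actually picks, so the system bends toward its users instead of the other way around. The `AdaptiveMenu` class and its options are invented for illustration; a real system would learn far richer signals.

```python
from collections import defaultdict

class AdaptiveMenu:
    """Toy interface that learns from every interaction:
    options the user picks most often surface first."""

    def __init__(self, options):
        self.options = list(options)
        self.counts = defaultdict(int)  # how often each option was chosen

    def record_choice(self, option):
        # Each interaction is a training signal, not just a command.
        self.counts[option] += 1

    def suggest(self):
        # Most-chosen options first; ties keep the original order
        # because Python's sort is stable.
        return sorted(self.options, key=lambda o: -self.counts[o])

menu = AdaptiveMenu(["open", "save", "export", "print"])
for _ in range(3):
    menu.record_choice("export")
menu.record_choice("save")

print(menu.suggest())  # ['export', 'save', 'open', 'print']
```

The interesting design choice is that the interface never asks the user to configure anything; it simply observes and reshapes itself, which is the same promise Samantha makes at a vastly larger scale.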

That’s the first step towards building for tomorrow.