“At the hospital they introduced a computer to do payroll. Instead of four people in the payroll department they now employ eight, plus a few in the new computer department,” my father explained at Sunday lunch more than 30 years ago.
My father worked at the time as a medical doctor at the citizens’ hospital (Bürgerspital) in Solothurn, a midsize Swiss town. Back in the eighties the first wave of computers made its way into every large organization, and quite often payroll was one of the first applications.
Payroll was a great problem for early computing: a relatively easy task (take the salary number, deduct social security charges, and issue a bank wire statement), highly repetitive (every month the same), and a volume play (the hospital employed hundreds of doctors, nurses, and auxiliary staff).
This was exactly what early computers were good at: straightforward, repetitive tasks multiplied many times. While the computers mastered their designated assignment well, the actual challenge was to embed this new technology into the daily routine of the hospital’s payroll department.
This was difficult for several reasons. For starters, computers were not flexible, so you had to adjust processes to the computer and not the other way around. The early computers were big boxes and error-prone, especially the matrix printers of the time. Next to the technology challenge you had a change management process on your hands. The answer, quite often: run the old and new processes in parallel for a while to account for teething troubles.
Some thirty years later we are at the dawn of a similar transformational phase: the advent of Artificial Intelligence, or AI for short. Every day someone claims another AI breakthrough. It’s the age of intelligent machines, automating everything and taking over the world, or so we’re told.
I beg to differ.
Instead I see a bit of history repeating itself here. Let me explain.
While Siri, Cortana, and similar services are getting good at simpler tasks, they stumble easily when asked more complex questions, or questions without context. Elsewhere you see early forms of machine learning and neural networks applied to repetitive tasks such as identifying cat pictures, with limited results.
Let’s inject a dose of pragmatism: currently it’s faster computing multiplied by clever algorithms.
And many of these algorithmic methods have been around for a while. For example, one of our core USPs, catalyst detection, is built on the shoulders of giants: the work of Robertson and Spärck Jones. The main body of this work was developed in the 1980s and 1990s. We expanded the work on probabilistic retrieval and applied machine learning techniques between 2012 and 2014, together with folks from the University of Applied Sciences of Zurich, and have continued ever since.
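To make that lineage concrete, here is a minimal sketch of a ranking function in the BM25 family, which grew out of Robertson and Spärck Jones’ probabilistic retrieval framework. The tiny corpus, the parameter values, and the function names are illustrative only; this is not Squirro’s actual catalyst detection code.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against a query using BM25,
    a ranking function rooted in Robertson and Spaerck Jones'
    probabilistic retrieval framework."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N   # average document length
    df = Counter()                          # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                     # term frequencies in this doc
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            # rarer terms get a higher inverse document frequency weight
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            # term-frequency saturation, normalized by document length
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores

# Tiny illustrative corpus (pre-tokenized):
docs = [["beer", "tent", "munich"], ["payroll", "hospital"], ["beer", "festival"]]
scores = bm25_scores(["beer"], docs)  # doc 1 scores 0.0: it lacks "beer"
```

Note how the length normalization (controlled by `b`) makes the shorter matching document rank above the longer one for the same single match.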
Here at Squirro we deploy these techniques to do more with data, and some of the results are quite amazing. Have you seen a system that doesn’t know what Oktoberfest is accurately predict the next one, using no more than a few time buckets of available data?
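To give a flavour of how such a prediction can work in principle (this toy sketch is my own illustration, not Squirro’s actual method): given counts of mentions per time bucket, even a naive autocorrelation can recover the dominant period of a recurring event and project its next occurrence.

```python
def dominant_period(counts):
    """Return the lag (in buckets) with the highest raw autocorrelation."""
    n = len(counts)
    mean = sum(counts) / n
    centered = [c - mean for c in counts]
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n // 2 + 1):
        corr = sum(centered[i] * centered[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

def predict_next_spike(counts):
    """Project the next spike: last observed spike plus the dominant period."""
    period = dominant_period(counts)
    peak = max(counts)
    last_spike = max(i for i, c in enumerate(counts) if c == peak)
    return last_spike + period

# Three years of monthly buckets with a spike each October (indices 9, 21, 33):
counts = [1, 1, 1, 1, 1, 1, 1, 1, 1, 9, 1, 1] * 3
```

On this series the detected period is 12 buckets, so the predicted next spike lands one period after the last observed one, with no notion of what “Oktoberfest” actually is.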
To quote Marc Vollenweider from Evalueserve: “We’ve tested 25 #AI engines & only @Squirro brought benefit.”
Independent evaluation from the market gives rigor and context to such new technologies and helps to distinguish today’s useful, practical AI from the hyperbole that surrounds it. During the initial assessment phase we were not aware of Evalueserve’s selection process, and we are obviously humbled by, and proud of, its outcome.
The AI story has a long way to go and won’t make that journey unassisted. As Marc explains in his recent book ‘Mind + Machine’, the phase of combining the best of machines and human brains is where we can expect to see great value. I agree.
To a certain extent, it’s the same story from 30 years ago. The promise of AI is real.
Within a limited domain, some AI products already produce stunning results. A good example is the progress made in applying AI to language translation. But they are still limited in scope and confined to that box. The everyday embedding piece is often missing.
We’re living in an age of experimentation. Along the way we learn of every stumble and hiccup, and that’s a good and necessary thing for any new technology. Exposing limitations de-mystifies a technology over time, making it more likely to be accepted, adopted, and to find its most useful and practical applications.
PS: Reach out to us; we’ve gained considerable experience deploying AI solutions that provide tangible results.
PPS: Were I an aspiring economist, I’d explore the reapplication of Solow’s thesis.
PPPS: Image credits