Tech & Innovation

The History of the AI Hype

What it teaches us for the future

Paul Wittig

11 Jun 2025 | 3 min.

Humans and artificial intelligence (AI) working together on a complex task

The topic of AI is currently on everyone’s mind, mainly driven by large language models (LLMs) such as ChatGPT and image generators such as DALL-E and Midjourney. There is a lot of talk about where the development of AI will take us. Which professions will soon no longer require human labour? What are the dangers of artificial general intelligence (AGI)?

The First AI Hopes in Retrospect

People tend to forget that this is not the first time we have asked ourselves these questions. The term ‘artificial intelligence’ was first used back in the 1950s. In 1959, Herbert A. Simon, J. C. Shaw and Allen Newell published the ‘General Problem Solver’ (GPS), a computer program designed to solve problems of all kinds. The attempt was ultimately considered a failure, as the program could only reliably solve simple, well-defined problems. Nevertheless, AI euphoria broke out in the 1960s: in 1963, Leonard Uhr and Charles Vossler described one of the first machine-learning programs, and in 1965 Joseph Weizenbaum built ELIZA, an interactive program designed to conduct a dialogue in English.

Extract from the ELIZA program (source: Wikipedia)
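To get a feel for how simple ELIZA’s mechanics actually were compared with today’s LLMs, here is a minimal, purely illustrative sketch of an ELIZA-style responder in Python. The patterns and phrasings are hypothetical, not Weizenbaum’s original script, but the principle is the same: match a pattern, reflect it back.

```python
import re

# Illustrative rules in the spirit of ELIZA's DOCTOR script (not the original):
# find a pattern in the user's input and reflect it back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

FALLBACK = "Please go on."

def respond(user_input: str) -> str:
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel tired of all the AI hype"))
    # -> Why do you feel tired of all the AI hype?
```

Everything the program appears to understand is hard-coded in rules like these, which is why the illusion of a conversation breaks down quickly once you probe it.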

You can imagine the enthusiasm, but also the fears, with which people reacted to these developments at the time. The AI research of the 1960s had a major influence on the science fiction writers of the era. It was assumed that the progress made so far could simply continue at the same or even greater speed.

In interviews from the time, you can hear visions of an AI-dominated world 10 to 15 years in the future: robots are everywhere and do all the tedious jobs, yet thanks to their intelligence they are also independent beings. (This went so far that one scientist expected his daughter to marry a robot rather than a human within 10 to 15 years.)

Somehow it all sounds familiar, doesn’t it? Since the release of ChatGPT, similar tones have been struck again: entire industries are declared obsolete, and scenarios of humanity’s extinction at the hands of AI are painted.

Early AI Visions and Their End

So the question is: why did the predictions of the 1960s not materialise?

Paul’s answer is quite simple: expectations were exaggerated. It was assumed that the great leaps that had enabled rapid progress could simply be repeated, and those hopes were disappointed. Development slowed down markedly, sceptics grew louder, and the first AI winter followed in the 1970s. Advances in expert systems led to a recovery in the field in the early 1980s, but here, too, the high expectations could not be met, and a second AI winter followed that lasted until the 2010s.


In 2017, ‘Attention Is All You Need’ was published, a research paper by Google that described a new deep learning architecture: the Transformer. Almost all LLMs are still based on the Transformer architecture today. Looking at the development of language models since GPT-3 in 2020, much of the progress has been evolutionary rather than revolutionary, and many of the gains in these models’ capabilities come from greater scaling with more computing power. According to a March 2025 survey by the Association for the Advancement of Artificial Intelligence (AAAI), 76% of the AI researchers surveyed do not believe that this approach can lead to artificial general intelligence (AGI).
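For readers curious what the Transformer’s core building block looks like, here is a minimal NumPy sketch of scaled dot-product attention as described in ‘Attention Is All You Need’. Shapes and names are simplified for illustration; real models stack many such layers with learned projections.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values

# Toy self-attention over 4 "tokens" with embedding dimension 8
rng = np.random.default_rng(42)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```

Scaling up essentially means stacking and widening many such layers and training them on more data with more compute, which is exactly the evolutionary path described above.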

New, fundamental innovations are therefore needed to fulfil the current expectations for the development of AI. However, no one can reliably predict if and when these will materialise.

So will there be a third AI winter? Perhaps. But not to the same extent as we saw in the 70s and 90s. LLMs and other generative models already have a lot of untapped potential. They will not replace humans – at least for the time being – but they can support us.

Open AI for Everyone

While frontier labs such as OpenAI or Anthropic pour ever more resources into ever smaller improvements, others are focusing on packaging existing capabilities into ever smaller and more efficient models that can be operated outside of huge data centres – in some cases even on smartphones.

Chart: cost of the cheapest LLM achieving a minimum MMLU score, on a log scale

Many of these models are published as open-source or open-weight models and can be operated on a company’s own infrastructure, allowing companies and private individuals to remain independent and flexible when using AI. In addition, companies that operate their own systems retain control over their own data – a particular sticking point with current AI offerings. This democratisation of access to AI will continue to progress and bring with it its very own innovations. The focus on major leaps towards artificial general intelligence, which should ultimately be almost indistinguishable from humans, seems to overshadow this potential in people’s minds.

The figure above is a log-scale chart of the declining cost of LLMs with constant capabilities between 2022 and 2024 (source: Andreessen Horowitz).
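To make the idea of running such an open-weight model on your own infrastructure concrete, here is a minimal sketch using the Hugging Face transformers library. The model name is only an example; any sufficiently small open-weight model with a suitable licence would do, and after the initial download, inference runs entirely on your own hardware.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Example model name only; substitute any small open-weight model you are allowed to run.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "Summarise the idea of an 'AI winter' in one sentence."
result = generator(prompt, max_new_tokens=60)
print(result[0]["generated_text"])
```

Because the weights live on your own machine, no prompts or company data have to be sent to an external provider.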

Are You Interested in the Topic of AI and Would Like to Gain More Insights?

Then take a look at our other blog articles on the use and development of AI, for example in the areas of collaboration and infrastructure. Keep an eye on our blog for more articles on this and other innovative topics. Many thanks to Paul for this exciting article.