Bleeding Edge - curse or blessing?

Florian Lohmeier
Technology
#LargeLanguageModels
#TextAI

The year was 2001, a good six years before the first iPhone came onto the market. The Wireless Application Protocol (WAP) had only just become "established". At that time, cell phones were still mainly used for making phone calls or sending text messages; browsing was really out of the question. Nevertheless, my colleagues and I were working on a "mobile" information portal. I still remember well how different display sizes and WAP standards made our lives difficult. By the time we had integrated automatic geolocation to find the nearest pharmacy, we knew our application was far ahead of its time. Neither users nor companies were ready to embark on the WAP adventure. For me, however, it was groundbreaking. I enjoyed being involved with bleeding-edge technologies right from the start, and that hasn't changed to this day.

For us at NEOMO, artificial intelligence didn't just start with ChatGPT. Since our founding in 2012, we have set ourselves the goal of "turning our experience from various projects in corporations and startups into useful products and effective services". This goes hand in hand with the urge to analyze new technologies, question established solutions, and enrich available frameworks with our own modules.

We developed our first applications based on Large Language Models (LLMs) in 2020 with T5 (read more in our success story). With the spread of other models and interaction options such as ChatGPT, we have become increasingly involved with these technologies and their areas of application in companies. In this context, too, we continue to strive to take our customers to the next level through the sensible use of modern solutions.

However, in discussions with customers and business partners, we usually hear phrases such as:

“We're not that far yet”

or

"We need to clean up the basics first"

But why is that? Why is there, once again or still, such a huge gap between modern technology and the actual state of software in companies? In some cases, there may simply be a lack of imagination as to what can really be achieved by modernizing individual processes.

This is precisely the point we are addressing with our current webinar series "Integrate AI deeply or leave it alone!". For us, there is only one way forward: we take every new technology, every new model and framework apart, and we see it as our job to educate our customers and partners about the advantages and disadvantages of using them. And we're not talking about the next generic chatbot here. We are focusing on business processes that can only be realized, or only be scaled, through the use of artificial intelligence.

Let's take the topic of "tenders", for example. Numerous companies respond to one RFI, RFQ or RFP after another and devote vast amounts of time and staff to it. We know what we are talking about, because we have of course taken part in one or two tenders ourselves. In the long run, however, this effort is out of proportion to the size of our company. We have therefore started to automate this process for ourselves with the help of artificial intelligence.

By crawling different tender sources and running our extraction modules, we pre-sort relevant candidates in a straightforward way. GPT models trained by us are then used to prepare the documents. In this way, we were able to set up a meaningful business process that could not have been sustained in the long term using conventional methods.
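To make the idea concrete, here is a minimal sketch of such a pre-sorting pipeline. Everything in it is an illustrative assumption: the Tender structure, the keyword list and the summarize_for_bid() stub stand in for our actual extraction modules and the GPT models we use to prepare the documents.

```python
# Minimal sketch of a tender pre-sorting pipeline (illustrative only).
# The relevance terms and the LLM step are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Tender:
    title: str
    text: str


# Assumed relevance terms; a real system would use trained extraction modules.
RELEVANT_TERMS = ["machine learning", "search", "text analytics", "llm"]


def relevance_score(tender: Tender) -> int:
    """Count how many relevant terms appear in the tender."""
    body = (tender.title + " " + tender.text).lower()
    return sum(term in body for term in RELEVANT_TERMS)


def presort(tenders, threshold=2):
    """Keep only tenders whose score meets the threshold, best first."""
    scored = [(relevance_score(t), t) for t in tenders]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [t for score, t in scored if score >= threshold]


def summarize_for_bid(tender: Tender) -> str:
    """Placeholder for the step that prepares the tender documents."""
    # In a production pipeline, a fine-tuned GPT model would draft this.
    return f"Summary stub for: {tender.title}"


if __name__ == "__main__":
    crawled = [
        Tender("Search platform RFP",
               "Enterprise search with text analytics and LLM support."),
        Tender("Office furniture RFQ",
               "Desks and chairs for a new office building."),
    ]
    for tender in presort(crawled):
        print(summarize_for_bid(tender))
```

The point of the sketch is the split, not the code: a cheap pre-sorting stage filters the flood of tenders down to a handful of candidates, and only those are handed to the more expensive language-model step.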

The answer to the question of how we approach the bleeding edge should be clear: it is still our sweet spot.

Florian Lohmeier
Co-Founder, NEOMO GmbH
Florian has always focused on all things visual and UX, with a bias towards search-based applications.

More blog posts


Current AI is like a 12-year-old colleague

The technology must be embedded in the process and adapted to it, not float next to it, unconnected. This also applies to large language models, even though their "humanity" gives the impression that they can be treated like a colleague rather than a tool. But how many business areas benefit from a colleague with the intellectual capacity of a twelve-year-old?


ChatGPT "knows" nothing

Language models notoriously struggle to recall facts reliably. Unfortunately, they also almost never answer "I don't know". The burden of distinguishing between hallucination and truth therefore falls entirely on the user, who effectively has to verify the model's output by obtaining the fact they are looking for from another, reliable source at the same time. LLMs are therefore worse than useless as knowledge repositories.


Rundify - read, understand, verify

Digital technology has overloaded people with information, but technology can also help them turn this flood into a source of knowledge. Large language models can, if used correctly, be a building block for this. Our "rundify" tool shows what that could look like.